Columns: id (int64, 0–17.2k), year (int64, 2k–2.02k), title (string, 7–208 chars), url (string, 20–263 chars), text (string, 852–324k chars)
14467
2023
"Nvidia became a $1 trillion company thanks to AI. Look inside its lavish 'Star Trek'-inspired HQ | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/nvidia-became-a-1-trillion-company-than"
"Nvidia became a $1 trillion company thanks to AI. Look inside its lavish ‘Star Trek’-inspired HQ | The AI Beat

Nvidia Voyager park and walkway - Gensler | Jason Park Photography

Over a million square feet across two massive steel and glass structures. Hundreds of conference rooms named after Star Trek places, alien races and starships, as well as astronomical objects — planets, constellations and galaxies. Acres of greenery and elevated “birds’ nests” where people can work and meet. A bar called “Shannon’s” with a panoramic view and plenty of table space for board games. 
This is the nearly $1 billion headquarters of Nvidia in Santa Clara, California — located on a patch of prime Silicon Valley land where the technology company has spent the past three decades growing from a hardware provider for video game acceleration to a full-stack hardware and software company currently powering the generative AI revolution. But amid the lavish architecture and the fun perks, it can be difficult to discern the hard work and intense pressure that supported Nvidia’s entrance into the $1 trillion valuation club last month, alongside fellow tech giants Alphabet, Amazon, Apple and Microsoft. As I walked the equivalent of a winding Yellow Brick Road to the main entrance, with a view of the towering curves and lines of the two buildings rising over the San Tomas Expressway, I wondered whether I’d get a peek behind the PR curtain — at Nvidia’s true nature.

‘Where’s Jensen?’

“Where’s Jensen?” I asked Anna Kiachian, the Nvidia PR manager who had arranged my campus visit. The truth is, I hadn’t expected to get an audience with Nvidia CEO and cofounder Jensen Huang. For all I knew, Huang had been relaxing in the Maldives ever since Nvidia became a Wall Street darling this spring in the wake of the generative AI boom — a decade after helping to power the deep learning “revolution.” Industry analysts estimate that Nvidia’s dominance extends to over 80% of the graphics processing unit (GPU) market, and GPUs are a must-have for every company running AI models, from OpenAI down to the smallest startup. Still, I figured a random sighting of Huang’s ubiquitous black leather jacket — from afar — was possible. “I’m not sure,” Kiachian replied with a conspiratorial smile as we strolled through an immense atrium with hundreds of triangular skylights gleaming overhead. 
But she emphasized that Jensen came into the office every day when he was in town: “So you never know!” Luckily, the sight lines were excellent for Jensen-watching, especially since the headquarters’ two buildings — Endeavor, which opened in 2017, and Voyager, which debuted in 2022 (both named after Star Trek starships) — were hardly filled to capacity. There were obviously plenty of Nvidia employees still working at home or on summer vacation, leaving plenty of white space against which to spot one black leather jacket.

But if any space could lure people back to the office, this is it: Endeavor and Voyager cost a whopping $920 million to build — a small price to pay, apparently, to meet Huang’s vision of giving every employee a view while boosting collaboration and random connections. Designed by Gensler, the architecture firm behind the tallest skyscraper in China, these headquarters are anything but a claustrophobic maze of hallways, cubicles and data centers. Instead, I felt like I could spot Jensen from a half-mile away across the sprawling, soaring, angular expanse.

There wasn’t much time for searching, however. I was on a strict schedule of meetings, beginning with a campus tour led by Jack Dahlgren, who heads up developer relations for Nvidia Omniverse but also served as project and design manager for the buildings. As I racked up steps on my Fitbit, Dahlgren interjected fun facts, like how people kept getting lost searching for conference rooms in Endeavor because their order was understood only by the most devoted sci-fi nerds and there was little signage (Dahlgren said Jensen felt a large map would clutter the landscape). The newer Voyager, he explained, has them in alphabetical order. The triangular design of the two buildings, he continued, is repeated in the triangles throughout the roof and floor plans, which were computationally designed with an algorithm. “Triangles represent the building blocks of all 3D graphics,” he said. 
There are also hidden metaphors: For example, Endeavor’s core can be seen as a tree trunk, with branches spread out from the center. It’s very noisy and busy in the middle, while around the outside are relaxed and quiet common spaces. Voyager, on the other hand, with its many noisy, whirring labs in the center, called “The Mountain,” has public spaces spread over the top (with “Shannon’s” bar at the pinnacle), featuring views facing Silicon Valley and the mountains beyond it.

Jensen Huang’s presence looms large at Nvidia

Huang, a native of Taiwan whose family emigrated to the U.S. when he was just four years old, co-founded Nvidia in 1993 with the goal of building graphics chips for accelerated computing — first for gaming, and then, it turned out, for AI. These days, Nvidia is as much, if not more, of a software company as a hardware company, with a full-stack ecosystem that began nearly two decades ago with CUDA (compute unified device architecture), which put general-purpose acceleration into the hands of millions of developers. Today, experts see little chance of anyone catching Nvidia when it comes to AI compute dominance, with the largest companies with the deepest pockets battling for access to Nvidia’s latest H100 GPUs.

Whether he is in the office or not, it’s clear that Huang’s presence looms large around every corner. He seems to serve as founder, fatherly figure and a sort of revered Star Trek captain. The phrase “Jensen says” is commonly uttered, whether in quotes from his many inspirational speeches about strategy and culture, or in his emphasis on a “first principles” approach — kind of a mission statement for each project. “Jensen says the mission is the boss,” said Dahlgren. For example, the mission was to build the headquarters, he explained. But no one was the boss of the project. Groups came together, he explained, and the project itself was the boss. That seemed a bit hard to believe — Huang certainly seemed like the boss. 
For a previous piece I wrote about Nvidia, an analyst told me that Huang is seen as demanding. There were graphics engineers at other tech companies who were “renegades” from Nvidia, he said — who left because they couldn’t handle the pressure. Still, Nvidia prides itself on its lack of hierarchy — other than Huang at the helm. One of the most important people in the “everyone else besides Jensen” camp is Chris Malachowsky, one of Huang’s two co-founders, who now serves as SVP for engineering and operations. In one of those “random connections” moments, Kiachian gave an excited little leap when she realized he was walking towards us, and gave me a warm introduction. When I asked him what he thought of the new campus, Malachowsky said it “boggled his imagination” and went on to quote one of Huang’s oft-repeated themes: “I know it seems absurd, but we think of ourselves as a startup,” he said. “Jensen used to say we were always 30 days from going out of business, so to actually be confronted with what not going out of business means is flattering and nice, I can honestly just say ‘wow.’”

Nvidia’s hardworking AI chips

Malachowsky’s mellow vibe did not extend, however, to the windowless lab that concluded my campus tour — a cold, noisy, claustrophobic space where Nvidia’s AI chips were being tested. Dahlgren pointed out that the basic principles for the chip designs were also used in the building’s design. “Before we send the chip off to the fab to get built, we do pre-silicon emulation — we test it with a supercomputer which emulates how the silicon and the wires will work when it’s put together,” he said. “We did the same thing when we built the model of the building — we simulated how light would flow, we measured that, we came to an understanding of how it would perform before we built it.” I thought of that when I saw examples of the chips in a museum-like demo room, from a $69 graphics card to the $40,000 H100 cluster — a thousand of which built OpenAI’s ChatGPT. 
The glossy, glimmering metal squares, rectangles and boxes were truly beautiful, disguising the massive workloads they take on to power today’s LLMs. They reminded me of Nvidia HQ’s shimmering skylights, uplifting views and bold, geometric design — which belie the late nights, drudgery and frustration that, I felt, must also be part of the company’s success algorithm.

Beneath Nvidia’s glossy surface

The Nvidia cafeteria was filled with hungry staffers by early afternoon. Kiachian pointed out that Jensen had decided to close the Endeavor cafeteria so everyone had to come to the one in Voyager — creating even more random connections for employees. So there were actual lines at the salad bar. Kiachian also pointed to a sign that said today was Popcorn Thursday, which, she noted with a laugh, was a surprisingly big deal at Nvidia. Highly paid developers, apparently, can still love a freshly popped bag of popcorn. As I munched my popcorn, I couldn’t help but wonder if that’s where I’d have to look to see beneath the surface of Nvidia: at the people. No matter how beautiful the campus, how positive the culture and how passionate the founder, doesn’t it still take people who work hard and set high standards and don’t always get along to get ahead? But that was hard to suss out on my tour: During my walk around Endeavor and Voyager, for example, Kiachian had decreed that what I thought was a funny anecdote from Dahlgren was off the record. It was something totally silly, just a memory of how Nvidia didn’t always have such a cushy campus. It was nixed, I suppose, because it didn’t fit Nvidia’s happy-go-lucky narrative. Dahlgren, for his part, brushed it off, saying that everyone at Nvidia seemed to have a sense of humor, even if it occasionally veered towards the dark side. “Some of it is dark humor, because work is hard,” he said. “But it’s rewarding.” As I ended my day at Nvidia, I realized that I never got my Jensen sighting. 
I wasn’t disappointed — I thoroughly enjoyed my brief landing on Planet Nvidia. But I wish I could have gotten more of a sense of the blood, sweat and tears that is undoubtedly required to build AI’s most famous picks and shovels. Still, the company’s dreamy culture of inspiration, illuminated by Endeavor and Voyager’s dramatic architecture and jaw-dropping hardware, is hard to resist. And I have a hunch Nvidia will live long and prosper. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14468
2023
"Cerebras unveils world's largest AI training supercomputer with 54M cores | VentureBeat"
"https://venturebeat.com/ai/cerebras-unveils-worlds-larges-ai-training-supercomputer-with-54m-cores"
"Cerebras unveils world’s largest AI training supercomputer with 54M cores

Cerebras' Condor Galaxy-1 AI Supercomputer has 54 million cores.

Cerebras Systems, the AI accelerator pioneer, and UAE-based technology holding group G42 have unveiled the world’s largest supercomputer for AI training, named Condor Galaxy. The network of nine interconnected supercomputers promises to reduce AI model training time significantly, with a total planned capacity of 36 exaFLOPs, beginning with the first AI supercomputer on the network, Condor Galaxy 1 (CG-1), which has 4 exaFLOPs and 54 million cores, said Andrew Feldman, CEO of Cerebras, in an interview with VentureBeat. Rather than make individual chips for its central processing units (CPUs), Cerebras takes entire silicon wafers and prints its cores on the wafers, which are the size of a pizza. 
These wafers have the equivalent of hundreds of chips on a single wafer, with many cores on each wafer. And that’s how they get to 54 million cores in a single supercomputer. In our interview, Feldman said, “AI is not just eating the U.S. AI is eating the world. There’s an insatiable demand for compute. Models are proliferating. And data is the new gold. This is the foundation.”

With this supercomputer, you get results twice as fast, using half the energy, said Feldman. “We’re the largest in the world. We’ve sold it to a company called G42, based in Abu Dhabi. We deployed it in Santa Clara, California and are currently running AI work,” Feldman said. “We manage and operate it through our cloud. It’s used by G42 for internal work and any excess capacity is resold by them or by us. This is the first of three U.S.-based supercomputers we intend to build for them in the next year. And the first nine, we intend to build for them in the next 18 months. And when these nine are connected, that will be a 36 exaflop constellation of supercomputers.”

Condor Galaxy is the name of the supercomputer, which scales from one to 32 CS-2 computers, made possible by the company’s MemoryX and SwarmX technology. The machine was stood up in Santa Clara in 10 days and it’s already one of the largest supercomputers in the world, Feldman said. The second machine will be in Austin, Texas and the third one will be in Asheville, North Carolina. Phase two’s deal value is in excess of $100 million. “It’s pretty crazy. When we’re done, we will have nine supercomputers, each of four exaFLOPs, interconnected to create a distributed 36 exaFLOP AI constellation. That’s nearly 500 million cores across 576 CS-2s with 3,490 terabytes of internal bandwidth. 
And we will need more than half a billion AMD Epyc cores just to feed us data.” Cerebras and G42 will deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With this unprecedented supercomputing network, they plan to revolutionize AI advancement globally. Located in Santa Clara, California, CG-1 links 64 Cerebras CS-2 systems together into an easy-to-use AI supercomputer with a training capacity of 4 exaFLOPs, which is offered as a cloud service. CG-1 is designed to enable G42 and its cloud customers to train large, ground-breaking models quickly and easily, thereby accelerating innovation. The Cerebras-G42 strategic partnership has already advanced state-of-the-art AI models in Arabic bilingual chat, healthcare, and climate studies. CG-1 offers native support for training with long sequence lengths, up to 50,000 tokens out of the box, without any special software libraries. Feldman said that programming CG-1 is done entirely without complex distributed programming languages, and even the largest models can be run without weeks or months spent distributing work over thousands of GPUs. The partnership between G42 and Cerebras delivers on all three elements required for training large models: huge amounts of compute, vast datasets, and specialized AI expertise. They are democratizing AI, enabling simple and easy access to the industry’s leading AI compute, and G42’s work with diverse datasets across healthcare, energy, and climate studies will enable users of the systems to train new cutting-edge foundational models. Cerebras and G42 bring together a team of hardware engineers, data engineers, AI scientists, and industry specialists to deliver a full-service AI offering to solve customers’ problems. This combination will produce groundbreaking results and turbocharge hundreds of AI projects globally. G42 is a conglomerate in Abu Dhabi with 22,000 employees across nine companies in 25 countries. 
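The scaling figures Feldman quotes are internally consistent, which a quick sketch can check. The per-CS-2 core count below is inferred from the article's own numbers (54 million cores over 64 CS-2 systems), and lands close to the 850,000 cores of Cerebras' second-generation wafer-scale engine:

```python
# Condor Galaxy scaling math, using only figures quoted in the article.
CS2_PER_SYSTEM = 64          # CG-1 links 64 Cerebras CS-2 systems
SYSTEMS = 9                  # nine interconnected supercomputers planned
EXAFLOPS_PER_SYSTEM = 4      # CG-1's stated training capacity
CORES_PER_CG1 = 54_000_000   # "54 million cores" per CG-1

cores_per_cs2 = CORES_PER_CG1 // CS2_PER_SYSTEM   # 843,750, near the WSE-2's 850k
total_cs2 = CS2_PER_SYSTEM * SYSTEMS              # 576 CS-2s, as quoted
total_exaflops = EXAFLOPS_PER_SYSTEM * SYSTEMS    # 36 exaFLOPs, as quoted
total_cores = CORES_PER_CG1 * SYSTEMS             # 486M, i.e. "nearly 500 million"

print(cores_per_cs2, total_cs2, total_exaflops, total_cores)
```

Each quoted total (576 CS-2s, 36 exaFLOPs, nearly half a billion cores) falls straight out of the two per-machine numbers.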
“Now, if you want to run the same model with 40 billion parameters on 1,000 GPUs, you have to write an additional 27,215 lines of code. Obviously, that’s not easy,” Feldman said. “Now, Cerebras with a 1 billion parameter model takes about 1,200 lines of code to put it on one CS-1. But if you want to run a 40 billion parameter model, or 100 billion parameter model, you use the same 1,200 lines of code. That’s it. And so you don’t have to write 27,215 lines of code.”

“Now this takes our cloud to a new level where we’re operating and running. We’re making them available through our cloud. We’re offering AI supercomputers as a service. If you want normal AI clusters, we have those too. This really takes our cloud to a new level.”

The machine is named after the Condor Galaxy, which is about five times larger than our own Milky Way. Cerebras now has about 335 people and it’s “hiring like crazy,” Feldman said. "
14469
2022
"Cowbell raises $100M to offer organizations continuous cyber insurance | VentureBeat"
"https://venturebeat.com/business/cowbell-raises-100m-to-offer-organizations-continuous-cyber-insurance"
"Cowbell raises $100M to offer organizations continuous cyber insurance

Today, cyber insurance provider Cowbell Cyber announced that it had closed a Series B funding round of $100 million for its continuous underwriting platform. The solution uses AI to assess risks in the environments of small to midsized enterprises and then offers coverage against those vulnerabilities. By continuously monitoring for threats, the solution provides companies with flexible insurance coverage that can keep up with the evolving risks of a dynamic enterprise environment. This means enterprises and decision-makers can monitor their exposure to cyber threats 24/7, and scale their coverage to ensure they’re financially prepared to mitigate security incidents and breaches. 
A dynamic way to mitigate cyber risk

Cowbell Cyber’s announcement comes as data breaches and cyberattacks become increasingly difficult to prevent, and organizations look to cyber insurance solutions to protect themselves from the financial impact of data breaches.

With research finding that the average total cost of a data breach is $4.24 million, many organizations are recognizing that a lack of preparation could put their business under serious financial strain, or out of action altogether. Fortunately, continuous cyber insurance provides enterprises with a solution that can decrease the amount they spend on remediating cyber threats. “The past two years have seen a rapid increase in cyber incidents, led by ransomware attacks and in general an evolving threat landscape due to the COVID-19 pandemic and, more recently, the Russia-Ukraine war. The attack surface has also broadened as a result of migration to cloud and offline-online initiatives,” said the founder and CEO of Cowbell Cyber, Jack Kudale, in an exclusive interview. “Today we use more than 1,000 data points and risk signals on each account to benchmark their risk profile against our risk pool of 23 million businesses, or about 70% of the SME market in the U.S. This is exactly how we bring more transparency – brokers, policyholders work off of the same data – in underwriting for cyber and help policyholders understand how their cyber policy is designed,” Kudale said.

The fight to offer scalable cyber insurance

Cowbell Cyber is part of the cyber insurance market, which was valued at $9.29 billion in 2021 and is estimated to reach $28.25 billion by 2027, as the advancement of digitalization and cloud computing makes it more difficult for security teams to secure their environments. 
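Kudale's description of benchmarking an account against a risk pool amounts to a percentile ranking over aggregated risk signals. The following toy version illustrates the idea only; every signal name, weight, and number here is invented, and Cowbell's actual factors are proprietary:

```python
# Toy illustration of benchmarking one account's risk score against a pool.
# All signal names, weights, and values are invented for illustration.

def risk_score(signals):
    """Collapse per-account risk signals into a single 0-100 score."""
    weights = {"open_ports": 2.0, "unpatched_cves": 3.0, "mfa_enabled": -10.0}
    raw = sum(weights[k] * v for k, v in signals.items())
    return max(0.0, min(100.0, raw))

def percentile(score, pool_scores):
    """Percentage of the pool this account scores at or above (riskier than)."""
    at_or_below = sum(1 for s in pool_scores if s <= score)
    return 100.0 * at_or_below / len(pool_scores)

pool = [risk_score({"open_ports": p, "unpatched_cves": c, "mfa_enabled": m})
        for p, c, m in [(3, 1, 1), (10, 4, 0), (1, 0, 1), (7, 2, 0), (4, 5, 1)]]
account = risk_score({"open_ports": 5, "unpatched_cves": 2, "mfa_enabled": 0})
print(round(percentile(account, pool)))  # → 60
```

A continuous-underwriting platform would recompute such a score as the monitored signals change, rather than only at policy renewal.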
Today, the provider is competing with a range of traditional cyber insurance carriers and insurtech companies. One of the organization’s main competitors is cyber insurance provider Coalition, currently valued at $3.5 billion, which offers an active insurance solution with real-time risk assessments and continuous underwriting. Another is Resilience, a cyber insurance company that recently closed an $80 million Series C funding round to cater to mid-market organizations with holistic cyber insurance packages, loss mitigation services and incident response planning. In the future, Kudale argues, Cowbell will differentiate itself from other providers by offering definitive end-to-end cyber insurance management in one place. “Our vertically integrated platform combines, in one system, every insurance process: application, risk rating, underwriting, policy management, claims management, risk aggregation, broker portal and more. Every stakeholder has access to the same information,” he said. "
14470
2022
"Arcion now reads logs from Oracle, promises 10x faster data replication | VentureBeat"
"https://venturebeat.com/data-infrastructure/arcion-now-reads-logs-from-oracle-directly-promises-10x-faster-data-replication"
"Arcion now reads logs from Oracle, promises 10x faster data replication

Concept illustration depicting "data replication"

California-based Arcion (formerly Blitzz), which offers a fully managed platform to replicate transactional data to cloud-based data platforms in real time, is making data extraction from Oracle databases faster with a new native log reader. The capability, part of Arcion’s latest release, enables enterprises to read logs from their Oracle instance directly during replication, eliminating the need to use LogMiner or other less effective or efficient sources. According to the company, this, combined with its distributed and parallel architectural design, ensures unlimited scalability and 10 times faster data extraction to target platforms such as Databricks, Snowflake, MySQL, PostgreSQL, SingleStore and Yugabyte. 
“Arcion is the only end-to-end multithreaded CDC [change data capture] solution that auto-scales vertically and horizontally. Any process Arcion runs on source and target is parallelized using patent-pending techniques to achieve maximum throughput. There isn’t a single step within the pipeline that is single-threaded. It gives Arcion users ultra-low latency CDC replication and can always keep up with the forever increasing data volume on the source. If an enterprise wants to migrate or replicate terabyte-scale data that requires high throughput, Arcion is the answer,” Gary Hagmueller, the CEO of the company, told VentureBeat. While newer data integration tools such as Airbyte, Debezium, StreamSets and Kafka Connectors miss out on this feature, there are many older CDC tools (Qlik Attunity, Fivetran-acquired HVR) that do offer the capability. However, as Hagmueller pointed out, all these older solutions require material effort to both set up and manage – which is not the case with Arcion.

Making data replication easier

In addition to the native reader for Oracle users, the latest Arcion release also simplifies the handling of DDL (data definition language) schema changes and data transformation for enterprises. As part of the former, the schema evolution capability of the platform has been extended to automatically capture DDL changes from a source database and replicate them in the target data platform. The feature saves data engineers the manual work of keeping schemas aligned between source and target databases. Previously, if there was a change to the DDL or schema on the source database, they had to stop the replication process and rebuild it from scratch by snapshotting the source system. This led to downtime, waste of expensive compute resources, and the chance of user error and data loss. 
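The value of automatic schema evolution is easiest to see in a sketch. The toy replication loop below (all names and the event format are hypothetical; Arcion's actual implementation is proprietary) applies DDL events from a source change stream to the target schema before replaying the row changes that follow them, so replication never has to stop and re-snapshot:

```python
# Toy change-data-capture loop illustrating automatic DDL schema evolution.
# Event format and names are invented for illustration, not Arcion's API.

events = [
    {"kind": "insert", "table": "orders", "row": {"id": 1, "total": 9.99}},
    {"kind": "ddl", "sql": "ALTER TABLE orders ADD COLUMN currency TEXT"},
    {"kind": "insert", "table": "orders",
     "row": {"id": 2, "total": 5.00, "currency": "USD"}},
]

target_schema = {"orders": ["id", "total"]}
applied_rows = []

for event in events:
    if event["kind"] == "ddl":
        # Replay the schema change on the target first, so later rows fit.
        # (A real system would translate dialects, e.g. Oracle -> Snowflake.)
        parts = event["sql"].split()
        table, column = parts[2], parts[5]
        target_schema[table].append(column)
    else:
        # Row changes arriving after the DDL reference the new column safely.
        assert set(event["row"]) <= set(target_schema[event["table"]])
        applied_rows.append(event["row"])

print(target_schema)      # {'orders': ['id', 'total', 'currency']}
print(len(applied_rows))  # 2
```

Without the DDL branch, the second insert would not match the target schema, which is exactly the stop-and-rebuild scenario the article describes.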
“Oracle GoldenGate is one CDC solution that supports automatic schema evolution (DDL). But Arcion is the only CDC platform that supports out-of-the-box DDL with modern analytic warehouses like Snowflake or Databricks. Oracle GoldenGate does not provide very robust support for Snowflake and Databricks, so anyone adopting such systems will find that solution inadequate. Alternatively, the data team has to be ready to invest in manual resources to handle the schema evolution with other alternative CDC solutions,” the CEO noted.

Meanwhile, to help enterprises better handle data transformations, Arcion is introducing a zero-code feature that delivers flexible, high-performance streaming column transformations on the fly. This eliminates the need to expend engineering resources on creating a staging layer (e.g., Kafka) and writing custom code to transform data on the target. That practice also led to delayed SLAs.

Oracle log reader availability

The Oracle log reader is currently available in beta and will see a wider rollout later this month, while the other two capabilities are now generally available as part of the fully hosted version of Arcion. With this release, Arcion is also adding Google BigQuery and Azure-Managed SQL Server as new sources and Imply (founded by the original creators of Apache Druid) as a new target. In all, the platform supports over 20 enterprise databases and data warehouses for data replication. A few months ago, the company also raised $13 million in Series A funding at a valuation of $65 million. “The data replication and protection software market showed much greater-than-expected resilience in 2020 despite the pandemic,” Phil Goodwin, research director at IDC’s infrastructure systems, platforms and technologies group, said. “We expect this market to return to its normal growth pattern, with a 2.7% CAGR through 2025. 
The public cloud services portion of the market is the bright spot, with an expected 11.6% CAGR during that time.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,471
2,023
"Midjourney's first mobile app is here...sort of | VentureBeat"
"https://venturebeat.com/ai/midjourneys-first-mobile-app-is-here-sort-of"
"Midjourney’s first mobile app is here…sort of Credit: VentureBeat made with Midjourney Bootstrapped startup Midjourney offers a popular text-to-image AI generator from its own foundation model. Since its beta launch in the summer of 2022, it has built a thriving community of users on its official server within the separate messaging app Discord — exceeding 16 million at the time of this article’s publication, including VentureBeat (we use it and other AI art generators to illustrate articles). To this day, more than a year later, Discord remains the primary means by which users can interact with Midjourney — they simply type a text prompt to the Midjourney Bot in Discord, which in turn produces quartets of generated images for the user to choose from, download, remix, edit sections of, pan over, or otherwise iterate on.
However, as of this week, Midjourney is now available in its own mobile app…sort of. According to founder David Holz , a former co-founder of Leap Motion and NASA researcher, who spoke at Midjourney’s regular weekly “Office Hours” audio conference held within Discord, Midjourney partnered with engineers from Japanese game company Sizigi Studios to launch Niji Journey , an Android and iOS app. Holz said Niji Journey was built for the Japanese market in particular and was focused on providing images using Midjourney’s anime art style setting, also known as “Niji.” The journey to Niji Journey The Niji Journey app is available for download now on both the Google Play and Apple App Stores for free with in-app purchases. It still requires a paid subscription through Midjourney to use (the cheapest tier is $8 a month paid as a lump-sum annual payment of $96, or $10 a month paid monthly). Existing Midjourney subscribers can log into it using their Discord credentials without paying more. Holz told the attendees of the Midjourney Office Hours call today that the Midjourney team — which consists of only a few dozen employees — had decided to partner with Sizigi Studios (known variously as Spellbrush and Waifu Labs , makers of the AI anime game Arrowmancer ), because Japanese users were more accustomed to interacting with software through mobile apps than on desktop, and had found Midjourney’s Discord Bot to be less user-friendly than they would like. Holz noted that the Niji Journey app was able to produce non-Niji Midjourney images, allowing users to access the entire range of art styles available through the AI generator, simply by selecting “v5” in the Niji Journey app’s settings — something observed by users on X (formerly Twitter). Midjourney's first mobile app is officially out!
It's called "Niji・Journey" but you can generate standard images if you select v5 from the settings. A great opportunity to see Midjourney's plans for mobile. pic.twitter.com/MGyPsN1dTi Holz said that Niji Journey was intentionally more information-dense and “busy” in its user interfaces than the Midjourney Discord implementation or other “Western” app design conventions, because the Japanese market prioritized granular controls over simplicity or minimalism. A full-fledged Midjourney app is coming…someday Holz invited existing Midjourney users to try out Niji Journey and provide feedback. Yet, he was clear in saying that Midjourney planned to ultimately release its own stand-alone app one day, but did not provide a timeline as to when it could be expected. He mentioned that mobile app economics at present did not typically support services as expensive as Midjourney, and said he and his team were thinking about how best to create an onboarding experience for new users should they release a stand-alone mobile app. During his talk, Holz mentioned that Midjourney planned to release two major new features this week — a native upscaler for increasing the resolution to 4000 by 4000 pixels, as well as a personalized style setting so that users could make their own style to apply to all image generations going forward, similar in a way to the custom instructions available in OpenAI’s ChatGPT. He cautioned that both features were still experimental and had not performed up to the team’s high standards, but were still good enough to be released soon and would hopefully be enjoyable and useful to Midjourney’s subscribers. Holz further stated that Midjourney would be pushing out an upgrade to its website soon as well, providing more features for viewing generated images and sharing them on social media. He said this update would occur in two phases.
Midjourney has gained a big following despite the technical hoops that users must go through to get started, in part because the imagery it generates is so high quality and interesting. But with the rise of high-quality image generating AI alternatives including OpenAI’s DALL-E 3 baked into ChatGPT Plus , and Adobe’s recent release of Firefly Image 2 , not to mention competitors such as Ideogram that can produce typography far more reliably and accurately than Midjourney, the launch of the Niji Journey mobile app could not have come sooner. And it remains to be seen if Midjourney’s general stagger-step, iterative, endearingly scrappy approach toward releasing new user-facing products and services will allow it to maintain the impressive userbase it has built so far, especially with the pressure of more well-funded and easier-to-use competition. "
14,472
2,023
"Ideogram launches AI image generator with impressive typography | VentureBeat"
"https://venturebeat.com/ai/watch-out-midjourney-ideogram-launches-ai-image-generator-with-impressive-typography"
"Watch out, Midjourney! Ideogram launches AI image generator with impressive typography Credit: VentureBeat made with Ideogram Earlier this week, a new generative AI image startup called Ideogram , founded by former Google Brain researchers, launched with $16.5 million in seed funding led by a16z and Index Ventures. Another image generator? Don’t we have enough to choose from between Midjourney , OpenAI’s DALL-E 2 , and Stability AI’s Stable Diffusion ? Well, Ideogram has a major selling point, as it may have finally solved a problem plaguing most other popular AI image generators to date: reliable text generation within the image, such as lettering on signs and for company logos. The company offers a number of preset image generation styles on its web app at ideogram.ai , including one labeled “typography,” which renders lettering in different colors, fonts, sizes and styling.
Other preset styles include 3D rendering, cinematic, painting, fashion, product, illustration, conceptual art, ukiyo-e and others. You can select multiple styles at once and apply them all. Ideogram is already available for signup in beta. And its Discord server and web app are already filled with examples of people generating lettering and images with lettering that are impressive (though not always entirely accurate) compared to the current state-of-the-art options. However, Ideogram also lacks some of the other features available on rival image generators like zoom out/outpainting, and its results were less consistent in our tests. It even had difficulty rendering its own name, “Ideogram,” and was better at rendering more common words. The company took the occasion of its launch and beta release to subtly highlight this feature with a post on X (formerly Twitter) including its mission statement, “help people become more creative,” generated using its tool. We're excited to announce the formation of Ideogram AI today! Our mission is to help people become more creative through Generative AI. https://t.co/ncHNI2vXfF pic.twitter.com/JtVAzpgpWl Other investors in Ideogram include AIX Ventures, Golden Ventures, Two Small Fish Ventures, and industry experts Ryan Dahl, Anjney Midha, Raquel Urtasun, Jeff Dean, Sarah Guo, Pieter Abbeel, Mahyar Salek, Soleio, Tom Preston-Werner and Andrej Karpathy. The Toronto-based startup has already earned shoutouts from fellow AI notables including David Ha, founder of Sakana AI, and Margaret Mitchell , both of whom also worked for Google. A new startup founded by former members of the Imagen team at Google Brain? https://t.co/6kI4nw6GJ7 H/T @hardmaru : A new startup founded by former Imagen people at Google Brain?.
I'm SUPER FASCINATED by this, as the "creativity" landscape in AI is one w so many paths forward. Let alone, there's huge risk of paths that DECREASE creativity. So curious what they're up to! https://t.co/hDlTQCAL2a While it’s still early days for Ideogram, differentiating by offering a reliable typographic generator is a smart move and may help it appeal to graphic designers or those who would otherwise have to hire them to create imagery with eye-catching text baked in. And other AI image generators are continuing to add new features, too. Just this week, Midjourney launched its new “vary region” feature to add, remove, and subtract portions of generated imagery. "
14,473
2,015
"Look at all the psychedelic art people are creating with Google's DeepDream AI code | VentureBeat"
"https://venturebeat.com/business/look-at-all-the-psychedelic-art-people-are-creating-with-googles-deepdream-ai-code"
"Look at all the psychedelic art people are creating with Google’s DeepDream AI code Just a few days after Google open-sourced its code for generating trippy images using artificial intelligence , you can now explore a subreddit page on social discussion board site Reddit that’s specifically devoted to sharing pictures created with DeepDream. Reddit user UngerUnder announced the establishment of the subreddit yesterday. Hat tip to Will Knight of the MIT Technology Review for tweeting out a link to the subreddit this morning. The subreddit, which goes by the name DeepDreaming, is a veritable shrine to machines tripping on data. It’s already a riot to look through. The top spot is currently held by “The Dog Is Watching,” a patchwork of many, many dog heads and too many eyes to count. Above: “The Dog Is Watching” Below that is “Matchstick Crane,” a tall structure decorated with dog heads.
Eyes patiently look on from the sky in the background. Above: “Matchstick Crane” There is also “Open Wide,” a mysterious tree stump composed of dogs. Above: “Open Wide” Not that r/deepdreaming is the only place to explore DeepDream art. Over the past few days, people have been tweeting out their DeepDream creations with the hashtag #deepdream. Flickr is chock full of DeepDream images, too. There’s even a page on the website Know Your Meme dedicated to DeepDream. But now people can share, explain, and critique DeepDream creations on a dedicated subreddit. Note: Please do not hold us responsible if you throw up while viewing the images. "
14,474
2,023
"Metaphysic PRO offers copyright & monetization of AI digital twins | VentureBeat"
"https://venturebeat.com/business/metaphysic-pro-wants-performers-to-copyright-manage-and-monetize-their-digital-twins"
"Metaphysic PRO wants performers to copyright, manage, and monetize their digital twins Credit: VentureBeat made with Midjourney Metaphysic , the startup founded in London, UK, on a tech demo used to create convincing deepfakes of Tom Cruise that went viral on social media app TikTok, is taking its tech and business offerings to the next level. The company this week announced the launch of Metaphysic PRO , a new tier of its current AI-powered digital twinning technology that aims to help performers secure the copyright over, and monetize, their digital likenesses. According to an email provided exclusively to VentureBeat from Metaphysic CEO and co-founder Tom Graham, the new PRO tier aims to “help people build a portfolio of their important data assets for the purpose of 1. creating generative AI content in the future, specifically tailored to generative AI content and 2.
empower them to own and control the fundamental building blocks of their AI likeness — that being the data used to train the models beyond face voice and performance, how you move, etc.” Put another way, Metaphysic PRO wants to offer the technology platform for people to store and manage all of the many different types of digital files that would be needed to recreate a person digitally, in 3D, for the purposes of performances and interactivity — including the audio files of their voice, movements via motion capture, even their favorite catchphrases, conversational topics, and the spaces they inhabit. “Metaphysic PRO helps people store that data, and then on top of that helps people manage the process of giving that data to third parties for the use,” wrote Graham to VentureBeat. Essentially, Metaphysic PRO is offering a content, rights management and monetization system for the digital self. Software vs. hardware vs. content management What kinds of data are needed to create a fully digital likeness of someone? Metaphysic’s website FAQ offers some insight: “If we are trying to create an AI model of a person’s face — then we need substantial visual data, in the form of videos and images, of that person’s face — from all angles and in different lighting conditions. The AI algorithms are very smart, but still require real world imagery to learn from. A good size training dataset for a face model might be made up of at least 5 minutes of good quality video of the person.” Metaphysic’s chief breakthrough is the AI software that processes this video and turns it into a 3D digital reproduction, as well as records the performer’s voice and interprets it to be used for future speaking or signing without having the original performer have to do it themselves. As the FAQ states, “Metaphysic has developed special AI scans that focus on capturing every detail of how your face moves and how your expressions are formed when talking, laughing and acting naturally. 
This data is used by the AI to learn how you look in any situation and render that in content. You can not create good AI models using data from traditional 3D scans.” Yet in order to even get to the point that Metaphysic’s software can create these AI models, you first need to scan a person using specialized hardware — not just any regular video will do. Metaphysic did not specify if it creates any hardware for this purpose — such as the 360-degree camera arrays used by other 3D scanning companies. However, Graham did tell VentureBeat that the PRO tier “will include a full professional studio scan storage of large amounts of data.” High-profile celebrities already onboard The company says it already counts celebrity customers including Tom Hanks and his actor spouse Rita Wilson; Anne Hathaway; Octavia Spencer; Paris Hilton; and the athlete and model Maria Sharapova. However, despite these high-profile celebrity endorsements, controversy remains around the use of 3D scanning and AI technologies in Hollywood and the entertainment industry. In particular, background actors (also known as “extras”) have expressed concerns that they have already been scanned on film sets and their likeness signed away to film studios to be used however they wish, potentially eliminating the need to hire the actors for more than a single day of work. Metaphysic, for its part, believes its approach allows actors to retain control over their likeness — giving them the power and legal right to do with it what the actor sees fit. “We are trying to empower people to fight unauthorized, deep fakes posted on internet platforms around the world,” Graham wrote. This is especially an issue for performers in Hollywood and the adult industry, who are already having their likenesses deepfaked and cloned by other AI tools for unauthorized uses. Copyrighting the digital self One of the most contentious issues around generative AI broadly has been that of copyright. While U.S. 
copyright law has for most of the nation’s history sought to protect human creative works (initially maps and charts, then gradually expanded to art and other creative products), the U.S. Copyright Office has recently and repeatedly ruled that AI generated work is not eligible for copyright because it was not created primarily by a human being. How then, does Metaphysic plan to allow actors and performers to copyright their digital likenesses made through the help of AI? “We designed the process of creating photorealistic AI likenesses of people directly in response to these exact circulars and missives from the [U.S.] Copyright Office,” wrote Graham to VentureBeat in his email. He continued: “Basically, at every step along the way, we insert significant human effort and work and very significant control from the person who is creating it, along with Metaphysic, helping them on a work-for-hire basis…and that I believe is the same as creating a character no different than if you were creating it on Photoshop or designing it yourself using technology. So that’s the premise and that’s why it’s different than some of the other responses to people trying to copyright AI generated content.” Metaphysic’s argument in favor of people being able to copyright their AI-generated likeness is, as Graham puts it, there is enough human labor in the process to warrant it. But also, that the resulting generated character is unique because it is of that unique person themselves. “This photorealistic AI version is a piece of manmade work,” he wrote. “It’s like a character. It’s just a character that happens to look exactly like you.” This argument has yet to be tested in a court case, but it will of course be interesting to find out if it holds up. Separately but relatedly, it may come down to the Supreme Court of the U.S. 
to decide the specifics around whether popular AI programs themselves violated copyright by using copyrighted materials to train on (though Metaphysic is not among those vendors accused of doing so). Pricing, security features, and availability Metaphysic is pursuing a subscription model for its AI digital twin management service in the range of “$8,000 to $10,000 per year,” according to Graham, depending on the size of the scan and different assets created to support it. That’s likely beyond the scope of most working background actors/extras, some of whom are already running out of money due to the ongoing Hollywood strikes. The company did not specify to VentureBeat if it intends to take a cut of any AI digital twins that are licensed out using its software. It says users ultimately own their Metaphysic PRO files, but Metaphysic will create and store them using “enterprise-grade security end-to-end,” including “raw image and audio data from any AI data scans or recordings” that Metaphysic conducts, or that the user uploads themselves from other third-party scanners and sources. It further says it encrypts its data on users’ likenesses and offers two-factor authentication to access it, and will delete all data and accounts upon request of an authorized user. Right now, Metaphysic PRO is available on an invitation-only basis, though anyone can apply for an invitation on its website. “Every person with a large audience or fan base should be proactive in protecting their brand and IP from bad actors that want to use non-consensual deepfakes and photorealistic AI avatars to exploit their likeness,” the company states.
"
14,475
2,023
"Meet the AI creative: Senior product designer Nicolas Neubert, creator of sci-fi movie trailer 'Genesis' | VentureBeat"
"https://venturebeat.com/ai/meet-the-ai-creative-senior-product-designer-nicolas-neubert-creator-of-sci-fi-movie-trailer-genesis"
"Meet the AI creative: Senior product designer Nicolas Neubert, creator of sci-fi movie trailer ‘Genesis’ Credit: Nicolas Neubert Nicolas Neubert did not set out to make international news, but that’s exactly what happened after he sat down at his home desktop computer in Cologne, Germany, near the end of June 2023 and began playing around with Gen2 , a new generative AI video creation tool from well-funded New York City startup RunwayML. The 29-year-old senior product designer at Elli, a subsidiary of automaker giant Volkswagen Group focused on electrification and charging experiences, had already been using his free time to generate sci-fi inspired images with Midjourney , a separate, popular text-to-image AI tool.
When Neubert caught wind of Runway’s Gen2, which allows users to upload a limited number of images and converts them freely, automatically, into short 4-second animations that contain realistic depth and movement, he decided to turn some of his Midjourney imagery into a concept film trailer. Trailer: Genesis (Midjourney + Runway) We gave them everything. Trusted them with our world. To become enslaved – become hunted. We have no choice. Humanity must rise again to reclaim. Images: Midjourney Videos: #Runway Music: Pixabay / Stringer_Bell Edited in: CapCut pic.twitter.com/zjeU7YPFh9 He posted the result, “ Genesis ,” a thrilling, cinematic, 45-second-long video that sketches out a variation of the age-old sci-fi theme of man vs. machine — this time, with humanoid robots that have taken over the world and a human rebellion fighting back against them, reminiscent of the Terminator franchise or the upcoming major motion picture The Creator — on his account on the social network X (formerly Twitter). Neubert didn’t expect much in the way of a response, maybe some attention from the highly active community there around AI art. Instead, the trailer quickly went viral, clocking in at 1.5 million views at the time of this article’s publication just a week and a half later, and earning him coverage on CNN and in Forbes. Neubert recently joined VentureBeat for an interview about his process for creating the trailer, his inspirations, his thoughts on the current debate in Hollywood and the arts over the use of AI, and what he has planned next. The following transcript of our question-and-answer (Q&A) session has been edited for length and clarity. VentureBeat: Congratulations on all your success and attention so far on the “Genesis” trailer.
It seems like you’re enjoying it, and that it’s opening people’s eyes to some of the possibilities and potential with generative AI tools. Tell me how you’re feeling about it all. Neubert: I think the feedback has been overwhelmingly positive. It was definitely not meant to blow up like this. I’m a generally curious person who likes to try out tools. When Runway announced they had an image-to-video tool, Gen2, of course I thought, ‘let’s try it out.’ I had these pictures lying around from previous Midjourney explorations which gave me a good base to get started. I told myself: ‘Why not? Let’s try to do a 60-second movie trailer and share it? What’s the worst that can happen?’ I guess people quite liked that a lot. I think it’s a great tech demo to see where we’re heading. And I think it definitely opened some discussions as to where AI can already be utilized to some extent, from a professional standpoint. Let me back up a little bit and ask you about your job. You’re at a Volkswagen subsidiary, is that right? Exactly. I’ve always had a full time job but I’ve enjoyed side ventures as well. Prior to this year, I always freelanced on the side working with startups helping those scale. And then at the beginning of this year, I kind of replaced that side hustle with getting invested into AI. Product design is my main job — I’ve been doing it for eight years — and the artistic, creative part has always been a hobby. Since I was a child, I always liked sketching, art, music, all of it. So when Midjourney came out in public beta [July 2022], it was kind of like a dream come true, right? You could suddenly visualize your thoughts and your creativity like never before. And that’s when I built my Twitter [X] platform around it, and I started growing that and then kind of always looked at how to combine different tools. In your role as a product designer over the past year at Volkswagen and then even prior to that, what tools were you using?
I explore all the hardware on the market, but I think you can really boil the toolset of a product designer down. I would say 95% of all creation comes from Figma. We spend our days creating screens, creating prototypes, designing pretty user interfaces and all of that. Of course, if you're working with advanced animations, or you need certain graphics, you might go out into a different tool. But 95% also means most of the job currently doesn't involve a lot of AI. I would say that Midjourney is entering the ring as a more and more attractive option now for brainstorming, ideation, or illustration, but I would still label that as playing around. What was the time frame and process for making the Genesis trailer? Did you make all the images beforehand, not knowing about Gen2, or did you make some specifically for the trailer? The week prior to having the idea of the trailer, I posted three photo series on Twitter [X]. And those photo series were, so to say, already in that world. I already had those themes of robot versus humans in a dystopian world. I already had a prompt that went very much in that direction. So when I decided to do the trailer, I realized I already had prompts and a great foundation, which I then quickly tweaked. Sitting down at my computer, it took seven hours from the beginning to the end. All in one time frame? Or did you have to take a break for your day job and go back to it? What was the kind of burst of work that you were able to do? I'm a night owl, so I did the first five hours at night; at some point the responsibility factor kicked in and I had to cut it off for the day job. But I would say I finished everything at night except for the last edits. It was just one or two scenes that were missing. Everything else was finalized. And then on the next day, after work, I quickly made those scenes, polished it all up and then posted it. So I would say it was like a five-hour and a two-hour session.
And you primarily used Midjourney to create still images and then animated them in Runway? Or did you use any other tools, such as CapCut, or something else for the music? To go back a step, one of the goals of not only this trailer, but of what I do with Midjourney, is to show the accessibility of it — of all the tools I use. And AI is a fascinating technology. For people who are not that confident in their creativity, these tools are finely tuned to help them actually get to a result. They can draw, maybe they can visualize something, but then they can take their ideas further with these tools. This is a very important point for me personally. So with this trailer, I wanted to demonstrate that the entry barrier is as low as possible. I wanted to show people they only need a couple of tools, and beyond that, all you need is your imagination. So we have Midjourney and Runway; those are the two paid applications. And then to keep everything else low barrier, for music, I went to Pixabay and took something out of their commercially free pool of soundtracks. For the editing, I used CapCut because it's free, and I did not have Adobe Premiere installed on the machine I was working on. It was surprisingly good, and I was surprised how much you can do in the graphics editor. It all just kind of came together perfectly. How long do you think it would have taken you if you had not had artificial intelligence? Would it have even been possible for you to create the Genesis trailer, if you had to edit it and animate it manually? Without AI, would I have had the skills to do it today? No. Is it possible for someone else? Yes. Of course. But you would have a much higher effort, right? You would probably approach it differently. Because right now with AI, we work with a couple of restrictions. We're working with images and we're animating those images.
If I were to approach this from a non-AI standpoint, I would certainly consider using gaming engines to get 3D stuff, using Blender and Cinema 4D, and building it completely differently from the ground up. That method results in higher quality and gives you more control, but it also takes a considerably longer amount of time. And if I may add, a lot of those tools can also get very expensive with their licensing. So, I think this is a perfect example of opening up this field of creating original videos at a very low entry barrier. These AI models will get better, and we will see the quality go up; we will get more control in the future. But for right now, we've got to live with the compromises. I mean, even if we don't pick on the quality, you don't need to have a professional reason to do it. You can also just throw in some images and see what happens and laugh about it. Did you post it on X (Twitter) first, or where was it when you made it available initially? Well, I currently only post on Twitter [X], primarily. But after the reaction there, I also started my Instagram up and posted it on LinkedIn. LinkedIn was a risk as it's for business, so I'm always a bit more reserved there. I saw recently you were celebrating that you crossed 20,000 followers on X (Twitter). Was that all from the trailer? Before the trailer, I was around 17,000. Now, almost one week later, I'm sitting on 22,000. So it got me something around 4,000 or 5,000 new followers. It also got you coverage on CNN and in Forbes, and I'm sure some other media as well. What were the reactions that you were getting, and how were they making you feel as you saw those coming in? Of course, it was exciting and positive. I remember at some point, I got a comment at night: 'Hey, I want to interview you for Forbes.' It was an amazing moment to see a comment like that. I was like, 'Oh, okay!' I realized the trailer had gotten into a different bubble, then.
I had been active on Twitter [X], and I knew it was receptive to AI and there was a nice community around it already. But at this moment, I saw we'd gone beyond that; we'd reached something else. Then I was at work the next day, and suddenly, I got a notification: 'Hey, by the way, you are being streamed on CNN!' And then I was like, 'Oh, shit, wow. This is really picking up steam!' Then from there, of course, it's really nice and happy and cool, but it also gets tiring, in the sense that all my notifications were blowing up and I was getting a ton of comments. And I wanted to do good community management, so I spent a lot of time interacting with commenters and people who asked questions or left responses. And I saw you posted a walkthrough or step-by-step of how you made it? Yeah, and I had those things planned out. Once I knew I was going to do a trailer, I had already decided to post it on Twitter [X] and that I would share the making of it, because I always share my prompts and my process. I think that combination of the trailer plus the making of it very much boosted the algorithm to make it more popular. This trailer came out at a time when the actors in Hollywood are striking, the writers are striking. They're concerned about AI. They've openly said, 'You know, we don't want AI to replace us or take our jobs.' How do you respond to those concerns? Was there any feedback or concern about this type of technology, and about your usage of it being an illustration of how things are going, how we may need less human labor to create these kinds of cool movies and scenes? It's a discussion that is happening in a lot of industries. I completely understand the concern and the importance of having these discussions. Personally, I always try to see the optimistic side of new technology. Rather than saying it will replace jobs, I see it much more as empowering somebody to do more. Because I think the true skill is still storytelling and creativity.
And storytelling and creativity are done best when performed by a human we can relate to, someone bringing their emotions into it. Therefore, while I do understand the concerns, I really believe that it will help us become better at what we do instead of replacing us in what we do. I kind of find myself sharing that perspective as well. And I also saw some comments saying the quality of the 'Genesis' trailer was not high enough to replace a Hollywood movie. But it sounds like that was never your goal, necessarily. What was your thinking when you made it: 'I'm going to try to do this more as a proof-of-concept rather than achieve the highest quality'? Absolutely. I primarily work in Midjourney, and we've reached a very good quality standard there. While I appreciate what it is, and it is truly impressive how good the tools are, I wouldn't say that the quality is where we need it to be to actually do proper commercial projects with it. I don't see it replacing an official trailer for Netflix anytime soon. What it was, more for me, was a tech demo to show what we can do today and how few resources you need to do something like that. The plot, the whole idea, it wasn't generated by AI. It was only visuals. But it is a good test case, and the reception to the trailer showed that it can be used to test ideas. That's something companies today could do. As a filmmaker, studio — Netflix, Amazon Prime, you name it — you wouldn't have to film the movie or incur high production costs to find out if an idea works with an audience, based on their reaction to an AI-generated trailer. It's kind of similar to fashion companies using Midjourney to do mood boards or inspiration boards. It gives us a very low-budget tool to visualize ideas. That's where I kind of draw the line, but I'm sure there are artists and companies that will dare to go beyond and that will use it to do commercials or shows.
Have you had interest from people in Hollywood, in the filmmaking business? Has anyone reached out to you to say, 'Hey, I want to learn from you!' or 'Hey, I want to collaborate or turn this into a full movie'? What's the response been from that field? There have certainly been requests coming in from filmmakers and other ventures that are interested in the technology. There are more people interested in collaborating or finding out more about it than there are people sending negative reactions. Do you plan to pursue those collaborations or turn this into a longer film? How are you thinking about what happens next with, in particular, the Genesis trailer or that world? Look, taking the Genesis trailer on my own with the tools we have today and making a full feature film probably won't happen. But I will definitely explore the world and expand it. The rest kind of depends on what happens, right? If a Netflix or somebody would approach me and be like, 'Hey, we liked the IP. Want to do something?' Of course. I'm not saying that I'm not interested in making this something real. However, I know that at the pace we're currently running, by the time we're halfway done with that trailer, we'd already have new tech available. So for now, I will definitely scale that world, tell more stories, create new generations around it. For something like a feature film to happen, let's see who approaches me with what ideas. How defined was the story going into the trailer and going into those images? Did you write it down, or was it just more loose in your head? And do you have names for these characters and items in the trailer? I didn't write anything down, for the reason that I'm a visual person. I have different mood boards where I have my pictures. In this case, I thought more in a visual sense. Before starting the trailer, I had an image pool of roughly 40 images that I had generated, which were enough to at least inspire me to start weaving them together into a story.
Some ideas happened while making it. There's the scene with the boy holding the glowing amulet, adding a little depth. Since posting the trailer, I now have an image world of roughly 500 images to weave together into stories. Right. But again, it was a tech demo. I kind of created the story to optimize for that. I think that's a different process than actually building a whole story. Not saying it's impossible. Do you have a strong background in sci-fi? Or what led you to this genre and these themes of man vs. machine? Well, I grew up with Star Wars and science fiction. Both of my parents are physicists, so that also played a large role in my life. More recently, there are things like Silo from Apple, the upcoming Starfield game from Bethesda, or the Cyberpunk 2077 game. Those are interesting topics for me and interesting experiences that I love delving into. So on the one hand, I am genuinely interested in that genre; on the other hand, I wanted to create a trailer in a theme where I know these AI models are capable of producing really good imagery. What do you plan to do next with AI? Creativity always has the opportunity to take me somewhere else, but I think there's some foundational stuff that I'll always pursue. I have a Twitter [X] platform and I have a strong emphasis on Midjourney. For the foreseeable future, I'll be there teaching people how to use these tools, trying to empower people to work with their creativity. Runway now is a new tool in the box. I will be experimenting more with them in tandem and with Runway itself. The story will be expanded: new stories will be made, and always will be. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles!
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
"What the viral AI-generated ‘Barbenheimer’ trailer says about generative AI hype | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/what-the-viral-ai-generated-barbenheimer-trailer-says-about-generative-ai-hype-the-ai-beat"
"What the viral AI-generated 'Barbenheimer' trailer says about generative AI hype | The AI Beat Image by Curious Refuge/YouTube A new AI-generated movie trailer that splices together the wildly hyped movies Barbie and Oppenheimer into a mashup — featuring a pink mushroom cloud — has gone viral. The trailer offers a spot-on sendup of the "Barbenheimer" hype that had moviegoers flocking to see both movies back-to-back, even though the two films couldn't be more different — Oppenheimer is a sober biopic about the life and legacy of J. Robert Oppenheimer, father of the atomic bomb, while Barbie is a fizzy, feminist, live-action look at the famous doll.
Powered by the image generation AI Midjourney and the video generator Runway Gen2, and featuring AI-generated voices supposedly belonging to Margot Robbie and Matt Damon, the "Barbenheimer" crossover took just four days to make, according to the creator's Reddit post, where he shared a link to his course on AI filmmaking. But as a reporter covering AI's cheerful, bullish, even fluffy side as well as its serious, sobering side, I can't help but think about three things the AI-generated 'Barbenheimer' movie trailer says about the state of generative AI right now. 1. AI-generated entertainment is moving fast — perfect for today's viral moments. So, it's no surprise that as the Barbenheimer hype rocketed upward, any online content-maker could jump on board with their own quick-and-dirty AI-generated take to share across social platforms. A traditional ad agency couldn't possibly move fast enough to pull off the same kind of "Barbenheimer" sendup to meet the moment, and the costs would be prohibitive enough that they likely wouldn't even try. In an era when social media content is part of the zeitgeist more than ever, there's no doubt that the speed of development of AI-generated entertainment is perfectly placed for today's viral moments. Back in March, for example, a Reddit user shared an AI-generated video of Will Smith eating spaghetti on the r/StableDiffusion subreddit. It quickly spread on social media as well as the mainstream press, with one article saying the video "would haunt you for the rest of your life." 2. The 'Barbenheimer' trailer comes as creatives strike and regulators play AI catchup. Hollywood has come nearly to a halt in recent weeks, with SAG-AFTRA actors and writers currently on strike and expressing particular concerns about the impact of gen AI on their industry and jobs.
The "Barbenheimer" trailer is a perfect example: Who needs the pricey services of Margot Robbie and Matt Damon if you can come up with a serviceable AI copy? Why use the time-consuming work of artists or editors when you have the speedy output of Midjourney and Runway Gen 2? At the same time, AI-focused creatives who are excited by the possibilities of gen AI are going full-steam ahead — even as regulators and policymakers sprint to catch up. The Senate will be schooled in AI this fall, with an eye toward laying a foundation for developing regulations in 2024. Will that be too little, too late? 3. The current generative AI hype may just be a candy-colored wrapper around a more serious, unsettling reality. The AI-generated Barbenheimer trailer is, in my opinion, funny and adorable. But the idea that you could wrap one of history's most horrifying periods — the development of the atomic bomb during World War II, which led to the deaths of hundreds of thousands at Hiroshima and Nagasaki in 1945 — in a candy-colored Barbie wrapper and a pink mushroom cloud is equal parts stunning and shocking. That's gen AI in a nutshell — stunning and shocking, exciting and frightening, dazzling and appalling, sometimes all at once. But certainly, all stakeholders involved in AI development need to consider not just the sugary surface of what gen AI can do, but the deep, real issues that lie underneath.
"Google Search caught indexing users' conversations with Bard AI | VentureBeat"
"https://venturebeat.com/ai/oops-google-search-caught-publicly-indexing-users-conversations-with-bard-ai"
"Oops! Google Search caught publicly indexing users' conversations with Bard AI Credit: VentureBeat made with Midjourney Google Bard, the search giant's conversational AI product, underwent a big update last week that earned mixed reviews. But this week, another, older Bard feature is coming under scrutiny: SEO consultant Gagan Ghotra observed that Google Search had begun to index shared Bard conversation links into its search results pages, potentially exposing information users meant to be kept contained or confidential. This means that if a person used Bard to ask it a question and then shared the link with a designated third party, say, their spouse, friend or business partner, the conversation accessible at that link could in turn be scraped by Google's crawler and show up publicly, to the entire world, in its search results.
On X (formerly Twitter), Ghotra posted a screenshot of evidence of several Bard conversations being indexed by Google Search. "Haha, Google started to index share conversation URLs of Bard. Don't share any personal info with Bard in conversation, it will get indexed and may be someone will arrive on that conversation from search and see your info. Also Bard's conversation URLs are ranking as… pic.twitter.com/SKGXJD9KEJ" Google Brain research scientist Peter J. Liu replied to Ghotra on X by noting that the Google Search indexing only occurred for those conversations that users had elected to click the share link on, not all Bard conversations, to which Ghotra patiently explained: "Most users wouldn't be aware of the fact that shared conversation mean it would be indexed by Google and then show up in SERP, most people even I was thinking of it as a feature to share conversation with some friend or colleague & it being just visible to people who have conversation URL." Ultimately, Google's Search Liaison account on X, which provides "insights on how Google Search works," wrote back to Ghotra to say: "Bard allows people to share chats, if they choose. We also don't intend for these shared chats to be indexed by Google Search. We're working on blocking them from being indexed now." A Google spokesperson sent an email to VentureBeat reiterating the comments of the Search Liaison account, and clarifying that shared Bard conversations are not available with the new Google Bard integrations with Gmail, Google Docs, and Google Drive. Even though Google says it is working on a fix, the mistake does not reflect well on Bard or Google's consumer AI ambitions, especially in the face of intense competition from other rival AI chatbots like OpenAI's popular ChatGPT. Hopefully Google's new AI, Gemini, will offer a better and more private experience.
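For context on the kind of fix Google describes, there are two standard, complementary mechanisms a site can use to keep share URLs out of search results: a robots.txt rule stops compliant crawlers from fetching the pages (though a URL can still appear in results if other sites link to it), while a noindex directive, served as a meta tag or an X-Robots-Tag HTTP header, tells the crawler not to index a page even after fetching it. A hypothetical sketch follows; the /share/ path is illustrative, not Bard's actual URL structure:

```text
# robots.txt at the site root: ask crawlers not to fetch shared-chat pages
User-agent: *
Disallow: /share/

# And/or an HTTP response header on each shared-chat page that forbids
# indexing even when the page does get fetched:
X-Robots-Tag: noindex
```

Note that the two directives interact: if robots.txt blocks crawling, the crawler never sees the noindex header, which is why sites that need URLs fully removed typically allow crawling and rely on noindex.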
Updated Weds. Sept. 27 at 4:27 pm ET with additional information provided by a Google spokesperson. A previous version of this article noted that there was a potential for private emails to be contained in public Bard search results. However, this mention has been excised as a result of the new information.
"Build a data lakehouse to avoid a data swamp | VentureBeat"
"https://venturebeat.com/2021/07/15/build-a-data-lakehouse-to-avoid-a-data-swamp"
"Build a data lakehouse to avoid a data swamp In my previous blog post, I ranted a little about database technologies and threw a few thoughts out there on what I think a better data system would be able to do. In this post, I am going to talk a bit about the concept of the data lakehouse. The term data lakehouse has been making the rounds in the data and analytics space for a couple of years. It describes an environment combining the data structure and data management features of a data warehouse with the low-cost, scalable storage of a data lake. Data lakes have advanced the separation of storage from compute, but do not solve the problems of data management (what data is stored, where it is, etc.). These challenges often turn a data lake into a data swamp.
Said a different way, the data lakehouse maintains the cost and flexibility advantages of storing data in a lake while enabling schemas to be enforced for subsets of the data. Let's dive a bit deeper into the lakehouse concept. We are looking at the lakehouse as an evolution of the data lake, and here are the features it adds on top:

Data mutation – Data lakes are often built on top of Hadoop or AWS, and both HDFS and S3 are immutable. This means that data cannot be corrected once written. With this also comes the problem of schema evolution. There are two approaches here: copy on write and merge on read – we'll probably explore this some more in the next blog post.

Transactions (ACID) / Concurrent read and write – One of the main features of relational databases, which helps us with read/write concurrency and therefore data integrity.

Time travel – This feature is provided through the transaction capability. The lakehouse keeps track of versions and therefore allows for going back in time on a data record.

Data quality / Schema enforcement – Data quality has multiple facets, but mainly it is about schema enforcement at ingest. For example, ingested data cannot contain any additional columns that are not present in the target table's schema, and the data types of the columns have to match.

Storage format independence – This is important when we want to support different file formats, from Parquet to Kudu to CSV or JSON.

Support for batch and streaming (real-time) – There are many challenges with streaming data, for example the problem of out-of-order data, which is solved by the data lakehouse through watermarking. Other challenges are inherent in some of the storage layers. Parquet, for instance, only works in batches: you have to commit your batch before you can read it. That's where Kudu could come in to help as well, but more about that in the next blog post.

Above: The evolution of the data lakehouse.
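To make the schema-enforcement point concrete: at ingest time, a lakehouse layer validates every incoming record against the target table's declared schema, rejecting extra columns and mismatched types before the write is committed. The toy Python sketch below illustrates the rule; the schema and helper functions are invented for illustration and are not the API of Delta Lake, Hudi, Iceberg or any other real lakehouse product, all of which implement this natively:

```python
# Toy sketch of lakehouse-style schema enforcement at ingest.
# The schema, validate() and ingest() here are invented for illustration.

TABLE_SCHEMA = {"user_id": int, "event": str, "amount": float}

def validate(record: dict) -> None:
    # Rule 1: no columns that are absent from the target table's schema.
    extra = set(record) - set(TABLE_SCHEMA)
    if extra:
        raise ValueError(f"unexpected columns: {sorted(extra)}")
    # Rule 2: the data types of the columns have to match the schema.
    for col, expected in TABLE_SCHEMA.items():
        if col in record and not isinstance(record[col], expected):
            raise TypeError(f"column {col!r} must be {expected.__name__}")

def ingest(table: list, record: dict) -> None:
    validate(record)  # reject bad records before the write is committed
    table.append(record)

table = []
ingest(table, {"user_id": 1, "event": "click", "amount": 2.5})  # accepted
try:
    ingest(table, {"user_id": 2, "session": "abc"})  # extra column: rejected
except ValueError as err:
    rejected = str(err)
```

The payoff of enforcing this at write time rather than read time is that downstream consumers never encounter records that silently break the table's contract, which is exactly what separates a lakehouse from a data swamp.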
Source: Databricks If you are interested in a practitioner's view of how increased data loads create challenges and how a large organization solved them, read about Uber's journey that ended up in the development of Hudi, a data layer that supports most of the above features of a lakehouse. We'll talk more about Hudi in our next post. This story originally appeared on Raffy.ch. Copyright 2021
"Using data ecosystems to gain an unbeatable competitive edge | VentureBeat"
"https://venturebeat.com/2021/08/06/using-data-ecosystems-to-gain-an-unbeatable-competitive-edge"
"Sponsored Using data ecosystems to gain an unbeatable competitive edge Presented by Capgemini Data collaboration is a massive growth opportunity. Organizations that make extensive use of external data enjoy a financial performance premium. The analysis from our recent Capgemini Research Institute report, Data-sharing masters, found that organizations that use more than seven data sources have nearly 14x the fixed-asset turnover and twice the market capitalization of organizations that do not use any external data for decision making. Moreover, there is a clear trend within organizations to accelerate data ecosystem engagements: 84% of organizations plan to launch new data ecosystem initiatives within the next three years. One in four (25%) organizations will invest upwards of $50m in data ecosystems in the next two to three years. Data-sharing masters, and how they outperform These data-sharing masters are in fact a group of organizations that significantly outperform others.
They are set apart by the way they leverage external data to enhance their own data-driven insights and decision making. They look beyond traditional sources of data and make use of data aggregators and data disruptors, such as hyperscalers, turning volumes of multi-source data into value, accelerated by their ability to share and collaborate. How do they manage this? The answer lies in the concept of data-sharing ecosystems, driving business outcomes across domains, industries and value chains. This high-performing cohort is able to fully exploit the data collaboration business opportunity, go beyond expected or usual outcomes, and create new intelligent experiences, products, services and business models. Starbucks, for example, went beyond customer purchase behavior analysis and enhanced personalization, using external data to position itself in niche areas of business. Another recent example is the Future4Care initiative, in which Capgemini was involved alongside Sanofi, Generali and Orange. Together, they created a unique health-focused open-innovation ecosystem in Europe to stimulate the development of e-health solutions and their go-to-market plans, for the benefit of both patients and health professionals. Adding 10% financial advantage: It can seem counter-intuitive to say that the more you share, the more you gain. Yet organizations involved in collaborative data ecosystems have the potential to drive an additional ten percentage points of financial advantage (including new revenue, higher productivity and lower costs) in the next three years. And the more you invest, the more you sustain. Investments made by organizations vary across sectors and countries; 55% of telecom firms will be investing over $50 million, while 43% of banking-sector companies will do so. This includes investments in technology infrastructure, tools, talent and skills, and process re-engineering, among others.
New, powerful approaches to data architecture: If you’ve already modernized your data infrastructure to be cloud-centric, new approaches such as data mesh architectures can link distributed data lakes into a coherent data mesh, enabling consumer data to be protected and stored locally but to remain accessible globally. A data mesh connects the various data lakes you need into a coherent infrastructure, one where all data is accessible as long as you have the right authority to access it. This doesn’t, however, mean there is “one great big virtual database”; the laws of physics mean that large, disparate data sets can’t simply be joined together over huge distances with any degree of performance. This is where new approaches such as federated analytics come in, enabling you to deploy analytics to multiple remote data sets, have the analytics run where the data lives, and collaborate on the results. When looking to provide external access to data, technologies such as homomorphic encryption enable you to give external organizations secure access without either their algorithms or your data being directly exposed. Differential privacy enables you to store the data in its raw form and provide high-quality access, but adds noise to the results, which protects privacy or IP. These new technologies, and others such as data marketplaces, build upon a well-built and well-managed data infrastructure; they won’t, however, solve problems for organizations with unmanaged, disjointed data silos and misaligned data governance. So, the good news for those that have already undertaken the transformation of their current data landscape is that data ecosystems are an obvious evolution that requires new technologies, but not the re-engineering of solid foundations. For those that have not made that transformation, however, their business disadvantage will continue to grow.
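To make the differential privacy idea above concrete, here is a minimal sketch (an illustration, not from the article, with hypothetical names) of the classic Laplace mechanism: an aggregate query is answered with calibrated noise so that no individual record can be inferred, while the aggregate remains useful.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon: float) -> float:
    """Differentially private count query.

    A count has sensitivity 1 (adding or removing one record changes it
    by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; real deployments also track a cumulative privacy budget across queries rather than treating each answer in isolation.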
Compliance: A mandatory act. Ethics continues to be a big topic for any organization working with data, and we’ve all seen what happens when data is mistreated. Data masters understand that you must put the security and protection of your data assets above all else, or risk losing everything. As such, organizations investing in data ecosystems must first lay the foundation for trust, ethics and compliance. In order to create a sustained competitive advantage, organizations have to build a clear roadmap from the beginning and answer some key questions at each stage, including why to engage in an ecosystem, which use case to tackle, which data can be shared, which data platform to use, and how to measure and monitor results. Above all, proactively addressing privacy, ethics, trust and regulatory requirements across your data value chain is mandatory. Even if only one ecosystem partner’s ethical guidelines and charter are well defined, they can be discussed and adapted into a shared set of policies for all parties involved. Moreover, this shared set of principles gives organizations a bedrock of ecosystem partnership and trust. The era of data commerce: Every organization should catch the wave now and ask itself: “Do I want to keep my data within my four walls and take the risk of being left behind by my competitors, or do I take the opportunity to multiply my business by engaging in data sharing?” Whatever question you’re starting with, remember that you can start small, learn from your ongoing initiatives and progressively infuse the data-sharing culture within your teams. Learning by doing is, again, the best approach. Anne-Laure Thieullent is Artificial Intelligence and Analytics Group Offer Leader at Capgemini; Steve Jones is Chief Data Architect at Capgemini. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked.
Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. © 2023 VentureBeat. All rights reserved. "
14480
2023
"The good, the bad and the ugly: The world of data is changing | VentureBeat"
"https://venturebeat.com/enterprise-analytics/the-good-the-bad-and-the-ugly-the-world-of-data-is-changing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The good, the bad and the ugly: The world of data is changing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It has never been more exciting to work in the data world. Twenty years ago, data was relegated to a back-office function. In 2023, it lies at the heart of an organization’s competitive advantage. Digitalization has accelerated the need for IT leaders to pay special attention to their data, AI and analytics estate. Beyond needing to meet companies’ goals to create more compelling customer experiences and optimize operations, technology leaders will see data play an increasingly integral role in their career evolutions in new and interesting ways: According to Gartner, 25% of traditional large enterprise CIOs will be held accountable for digital business operational results — effectively becoming “COO by proxy” by next year. 
To succeed, technology and data leaders will need to take stock of the good, bad and ugly of the fast-evolving data space. The good: The data organization is now a value organization. Here is the great news: 83% of companies report that they have appointed an executive to drive their data strategy. This represents approximately 700% growth in 10 years (in 2012, only 12% of companies had chief data officers, or CDOs). 70% of these data leaders report to the company’s president, CEO, COO or CIO, allowing them to focus on what creates business value rather than activities that look like a cost center. Additionally, technology executives are now structuring their teams to support the building of data products. According to Harvard Business Review, this can reduce the time it takes to implement data in new use cases by as much as 90%, decrease total ownership costs by up to 30%, and reduce risk and data governance burden. Consequently, nearly 40% of data leaders report adopting a product management orientation to their data strategy, hiring data product managers to ensure that members of a data product team don’t just create algorithms, but instead collaborate in deploying entire business-critical applications. The bad: Data leaders are misunderstood. While 92% of firms say they are seeing returns from data and AI investments, only 40% of companies said the CDO role is currently successful within their organization. Data chiefs sound pretty depressed, too: 62% report that they feel their role is poorly understood. They point to the typical issues of nascent organizations: overinflated expectations, unclear charters and difficulty exerting influence.
This tends to frustrate everyone involved: According to MIT, Fortune 1000 companies claim that only half of their data leaders can drive innovation using data, and 25% say they have no single point of accountability for data within their organizations. The result: Close to 75% of organizations have failed to create a data-driven organization. This indicates the clear need for data leaders to structure their organizations in a way that adds visible value to their employers, and quickly. The ugly: What’s worse is that the average tenure of data leaders is less than 950 days. This compares to seven years for the typical CEO and just over 4.5 years for the average CIO. When data leaders don’t get the time to create the structure their organization needs to win with data, everyone loses. Best practices are lost; the credibility of data engineers, analysts and scientists is affected; and business counterparts lose confidence in their leadership’s ability to build the data-driven organization they’ve committed to. Now what? There is hope: According to recent research, and despite a possible looming macroeconomic crisis, more than two in three data leaders (68%) are looking to increase data management investments in 2023. On average, as our internal report shows, CDOs and CIOs have managed budgets of $90 million, with about 50% going toward personnel, 40% toward third-party software and 10% toward corporate overhead expenses. It will be interesting to see how they decide to manage their investments this year. A recent report indicates that 52% of data leaders will focus first on improving governance over data and processes, second on culture and literacy (46%), and third on gaining a holistic view of customers (45%). Data leaders also need to adapt their teams’ structure as the industry shifts away from centralized data teams creating data pipelines and static dashboards toward a data mesh model.
This is where data practitioners sit within the business domains and own their own data, developing dynamic data products and applications. The data mesh model brings data and analytics projects closer to the line of business, driving tangible ROI for business users. About 60% of survey respondents indicated that they plan to shift to a data mesh model in the next five years. There are at least four key roles CDOs should count on to build this new model: the data product manager, the program manager, the UX leader and the data engineer. While some of these roles have existed for a while, the data product manager is a new and emerging career opportunity for aspiring data professionals. Three critical changes to make now: From a technology standpoint, there are three key changes that data leaders will need to make. Shifting from data warehouses to data lakehouses to cost-effectively support the rising volume, variety and velocity of data and reduce time-consuming and expensive data movement. Transitioning from siloed business intelligence dashboards to data products that work at enterprise grade (globally available, highly reliable and optimized for high data volume) and live up to consumer-grade scenarios (fast and responsive, optimized for high concurrency, and working in real time, all the time). Increasing focus on real-time and AI operationalization: Providing compelling customer experiences requires that an organization’s data and analytics infrastructure be optimized for real-time decisions. Unfortunately, there is just too much data and too much input for data teams to provide the support that is needed. According to our recent CDO survey, 55% of organizations report managing 1,000 or more sources of data. Data fragmentation and complexity is the number one barrier to digital transformation. Leaders will have to find ways to build a center of competency to deploy intelligent services on top of their unified data platform.
To summarize, data and analytics have become increasingly important to businesses and have attracted significant investments from enterprise leaders. To maximize ROI, however, enterprise data leaders should adapt their organizational structure, the strategies they are pursuing and the types of technologies they are purchasing to drive measurable and tangible business value. Derek Zanutto is general partner at CapitalG. "
14481
2021
"Despite high demand for data leadership, CDO roles need improvement | VentureBeat"
"https://venturebeat.com/business/despite-high-demand-for-data-leadership-cdo-roles-need-improvement"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Despite high demand for data leadership, CDO roles need improvement Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At a time when data-driven companies consistently outperform their peers, strong data leadership can make the difference between success and failure. In fact, Chief Data Officers are now a permanent fixture in 65% of companies — a remarkable improvement on the 12% of companies where CDOs had a home in 2012, according to research by Exasol. Above: According to Collibra and Forrester Consulting, there are clear revenue benefits from going data-driven. But Exasol’s research finds CDOs are frustrated by C-suites blocking the change required. The survey uncovered high demand for strong data leadership , as well as some confusion and uncertainty about the CDO role. 
According to the study, roughly half (50%) of CDOs believe the value of their role is not yet recognized in the business world, while a similar number (46%) say that organizations’ expectations for the CDO role are too high and misinformed. Its findings also support previous research revealing that the average CDO tenure is shorter than most: one in five (17%) of the CDOs surveyed had stayed in their previous role for only one to two years, a nod to the high demand for these professionals. Overall, 64% agreed that the career path to CDO isn’t obvious. When it comes to nurturing data professionals in their journey to CDO, the report uncovered an opportunity for non-technical professionals to assume the role. Of those surveyed, only 3% were from an arts/creative background, but 59% agreed there was value in hiring applicants with more diverse backgrounds. Since data-centric companies are 58% more likely to exceed revenue goals, there has never been a more exciting time or opportunity to support data experts, nurture data talent and expand our horizons in the search for the data leaders of the future. To better understand the CDO journey, Exasol surveyed 250 active CDOs across the UK, the US and Germany to uncover the education, skill sets and experiences that have helped current CDOs get to where they are today, the challenges and barriers they’ve faced along the way, and what needs to change in order to support and promote more skilled people into the role of CDO. Read the full report by Exasol.
"
14482
2023
"2023 data, ML and AI landscape: ChatGPT, generative AI and more | VentureBeat"
"https://venturebeat.com/ai/2023-data-ml-and-ai-landscape-chatgpt-generative-ai-and-more"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 2023 data, ML and AI landscape: ChatGPT, generative AI and more Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s been less than 18 months since we published our last MAD (Machine Learning, Artificial Intelligence and Data) landscape , and there have been dramatic developments in that time. When we left, the data world was booming in the wake of the gigantic Snowflake IPO with a whole ecosystem of startups organizing around it. Since then, of course, public markets crashed, a recessionary economy appeared and VC funding dried up. A whole generation of data/AI startups has had to adapt to a new reality. Meanwhile, the last few months have seen the unmistakable and exponential acceleration of generative AI , with arguably the formation of a new mini-bubble. Beyond technological progress, AI seems to have gone mainstream with a broad group of non-technical people around the world now getting to experience its power firsthand. 
The rise of data, ML and AI has been one of the most fundamental trends in our generation. Its importance goes well beyond the purely technical, with a deep impact on society, politics, geopolitics and ethics. Yet it is a complicated, technical, rapidly evolving world that can be confusing even for practitioners in the space. There’s a jungle of acronyms, technologies, products and companies out there that’s hard to keep track of, let alone master. The annual MAD landscape is an attempt at making sense of this vibrant space. Its general philosophy has been to open-source work that we would do anyway and start a conversation with the community. So, here we are again in 2023. This is our ninth annual landscape and “state of the union” of the data and AI ecosystem. Here are the prior versions: 2012, 2014, 2016, 2017, 2018, 2019 (Part I and Part II), 2020 and 2021. As the 2021 version was released late in the year, I skipped 2022 to focus on releasing a new version in the first quarter of 2023, which feels like a more natural publishing time for an annual effort. This annual state-of-the-union post is organized into four parts. Part I: The landscape (PDF here, interactive version here). Part II: Market trends: Financings, M&A and IPOs (or lack thereof). Part III: Data infrastructure trends. Part IV: Trends in ML/AI. MAD 2023, part I: The landscape. After much research and effort, we are proud to present the 2023 version of the MAD landscape. When I say “we,” I mean a little group whose nights will be haunted for months to come by memories of moving tiny logos in and out of crowded little boxes on a PDF: Katie Mills, Kevin Zhang and Paolo Campos. Immense thanks to them.
And yes, I meant it when I told them at the onset, “oh, it’s a light project, maybe a day or two, it’ll be fun, please sign here.” So, here it is (cue drum roll, smoke machine). In addition, this year, for the first time, we’re jumping head first into what the youngsters call the “World Wide Web,” with a fully interactive version of the MAD landscape that should make it fun to explore the various categories in both “landscape” and “card” format. General approach: We’ve made the decision to keep both data infrastructure and ML/AI on the same landscape. One could argue that those two worlds are increasingly distinct. However, we continue to believe that there is an essential symbiotic relationship between those areas. Data feeds ML/AI models. The distinction between a data engineer and a machine learning engineer is often pretty fluid. Enterprises need to have a solid data infrastructure in place before they can properly leverage ML/AI. The landscape is built more or less on the same structure as every annual landscape since our first version in 2012. The loose logic is to follow the flow of data from left to right: from storing and processing, to analyzing, to feeding ML/AI models and building user-facing, AI-driven or data-driven applications. We continue to have a separate “open source” section. It’s always been a bit of an awkward organization, as we effectively separate commercial companies from the open source projects they’re often the main sponsor of. But equally, we want to capture the reality that for one open source project (for example, Kafka), you have many commercial companies and/or distributions (for Kafka: Confluent, Amazon, Aiven, etc.). Also, some open source projects appearing in the box are not fully commercial companies yet. The vast majority of the organizations appearing on the MAD landscape are unique companies, with a very large number of VC-backed startups.
A number of others are products (such as those offered by cloud vendors) or open source projects. Company selection: This year, we have a total of 1,416 logos appearing on the landscape. For comparison, there were 139 in our first version in 2012. Each year we say we can’t possibly fit more companies on the landscape, and each year we need to. This comes with the territory of covering one of the most explosive areas of technology. This year, we’ve had to take a more editorial, opinionated approach to deciding which companies make it onto the landscape. In prior years, we tended to give disproportionate representation to growth-stage companies, based on funding stage (typically Series B-C or later) and ARR (when available), in addition to all the large incumbents. This year, particularly given the explosion of brand-new areas like generative AI, where most companies are one or two years old, we’ve made the editorial decision to feature many more very young startups on the landscape. Disclaimers: We’re VCs, so we have a bias toward startups, although hopefully we’ve done a good job covering larger companies, cloud vendor offerings, open source and the occasional bootstrapped company. We’re based in the US, so we probably over-emphasize US startups. We do have strong representation of European and Israeli startups on the MAD landscape. However, while we have a few Chinese companies, we probably under-emphasize the Asian market, as well as Latin America and Africa (which just had an impressive data/AI startup success with the acquisition of Tunisia-born Instadeep by BioNTech for $650M). Categorization: One of the harder parts of the process is categorization, in particular what to do when a company’s product offering straddles two or more areas.
It’s becoming a more salient issue every year, as many startups progressively expand their offerings, a trend we discuss in “Part III: Data infrastructure trends.” It would be equally untenable to put every startup in multiple boxes in this already overcrowded landscape. Therefore, our general approach has been to categorize a company based on its core offering, or what it’s mostly known for. As a result, startups generally appear in only one box, even if they do more than just one thing. We make exceptions for the cloud hyperscalers (many AWS, Azure and GCP products across the various boxes), as well as some public companies (e.g., Datadog) and very large private companies (e.g., Databricks). What’s new this year. Main changes in “Infrastructure”: We (finally) killed the Hadoop box to reflect the gradual disappearance of the OG Big Data technology – the end of an era! We decided to keep it one last time in the MAD 2021 landscape to reflect the existing footprint. Hadoop is actually not dead, and parts of the Hadoop ecosystem are still being actively used. But it has declined enough that we decided to merge the various vendors and products supporting Hadoop into Data Lakes (and kept Hadoop and other related projects in our open source category). Speaking of data lakes, we rebranded that box “Data Lakes/Lakehouses” to reflect the lakehouse trend (which we had discussed in the 2021 MAD landscape). In the ever-evolving world of databases, we created three new subcategories. GPU-accelerated Databases: Used for streaming data and real-time machine learning. Vector Databases: Used for unstructured data to power AI applications (see What is a Vector Database?). Database Abstraction: A somewhat amorphous term meant to capture the emergence of a new group of serverless databases that abstract away a lot of the complexity involved in managing and configuring a database. For more, here’s a good overview: 2023 State of Databases for Serverless & Edge.
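As context for the vector database subcategory mentioned above, the core operation such systems serve, nearest-neighbor search over embeddings, can be sketched in a few lines of brute-force cosine similarity. This is an illustration with hypothetical names, not any vendor's API; production systems use approximate indexes such as HNSW or IVF rather than exhaustive scans.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class ToyVectorIndex:
    """Brute-force vector store: each item is (id, embedding)."""

    def __init__(self):
        self.items = []

    def add(self, item_id, vec):
        self.items.append((item_id, vec))

    def query(self, vec, k=1):
        # Rank all stored items by similarity to the query vector.
        ranked = sorted(self.items, key=lambda it: cosine(it[1], vec), reverse=True)
        return [item_id for item_id, _ in ranked[:k]]
```

The point of a dedicated vector database is to make exactly this query fast at scale (millions of high-dimensional embeddings) and to pair it with filtering, persistence and updates.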
We considered adding an “Embedded Database” category with DuckDB for OLAP, KuzuDB for Graph, SQLite for RDBMS and Chroma for search but had to make hard choices given limited real estate – maybe next year. We added a “Data Orchestration” box to reflect the rise of several commercial vendors in that space (we already had a “Data Orchestration” box in “Open Source” in MAD 2021). We merged two subcategories, “Data observability” and “Data quality,” into just one box to reflect the fact that companies in the space, while sometimes coming from different angles, are increasingly overlapping – a signal that the category may be ripe for consolidation. We created a new “Fully Managed” data infrastructure subcategory. This reflects the emergence of startups that abstract away the complexity of stitching together a chain of data products (see our thoughts on the Modern Data Stack in Part III), saving their customers time, not just on the technical front, but also on contract negotiation, payments, etc. Main changes in “Analytics” For now, we killed the “Metrics Store” subcategory we had created in the 2021 MAD landscape. The idea was that there was a missing piece in the modern data stack. The need for the functionality certainly remains, but it’s unclear whether there’s enough there for a separate subcategory. Early entrants in the space rapidly evolved: Supergrain pivoted, Trace built a whole layer of analytics on top of its metrics store, and Transform was recently acquired by dbt Labs. We created a “Customer Data Platform” box, as this subcategory, long in the making, has been heating up. At the risk of being “very 2022”, we created a “Crypto/web3 Analytics” box. We continue to believe there are opportunities to build important companies in the space. 
Main changes in “Machine Learning/Artificial Intelligence” In our 2021 MAD landscape, we had broken down “MLOps” into multiple subcategories: “Model Building,” “Feature Stores” and “Deployment and Production.” In this year’s MAD, we’ve merged everything back into one big MLOps box. This reflects the reality that many vendors’ offerings in the space are now significantly overlapping – another category that’s ripe for consolidation. We almost created a new “LLMOps” category next to MLOps to reflect the emergence of a new group of startups focused on the specific infrastructure needs for large language models. But the number of companies there (at least that we are aware of) is still too small and those companies literally just got started. We renamed “Horizontal AI” to “Horizontal AI/AGI” to reflect the emergence of a whole new group of research-oriented outfits, many of which openly state artificial general intelligence as their ultimate goal. We created a “Closed Source Models” box to reflect the unmistakable explosion of new models over the last year, especially in the field of generative AI. We’ve also added a new box in “Open Source” to capture the open source models. We added an “Edge AI” category – not a new topic, but there seems to be acceleration in the space. Main changes in “Applications” We created a new “Applications/Horizontal” category, with subcategories such as code, text, image, video, etc. The new box captures the explosion of new generative AI startups over the last few months. Of course, many of those companies are thin layers on top of GPT and may or may not be around in the next few years, but we believe it’s a fundamentally new and important category and wanted to reflect it on the 2023 MAD landscape. Note that there are a few generative AI startups mentioned in “Applications/Enterprise” as well. 
In order to make room for this new category:

We deleted the "Security" box in "Applications/Enterprise." We made this editorial decision because, at this point, just about every one of the thousands of security startups out there uses ML/AI, and we could devote an entire landscape to them.

We trimmed down the "Applications/Industry" box. In particular, as many larger companies in spaces like finance, health or industrial have built some level of ML/AI into their product offerings, we've made the editorial decision to focus mostly on "AI-first" companies in those areas.

Other noteworthy changes

We added a new ESG data subcategory to "Data Sources & APIs" at the bottom to reflect its growing (if sometimes controversial) importance.

We considerably expanded our "Data Services" category and rebranded it "Data & AI Consulting" to reflect the growing importance of consulting services in helping customers navigate a complex ecosystem, as well as the fact that some pure-play consulting shops are starting to reach early scale.

MAD 2023, Part II: Financings, M&A and IPOs

"It's been crazy out there. Venture capital has been deployed at an unprecedented pace, surging 157% year-on-year globally […]. Ever higher valuations led to the creation of 136 newly-minted unicorns […] and the IPO window has been wide open, with public financings up +687%"

Well, that was… last year. Or, more precisely, 15 months ago, in the MAD 2021 post, written pretty much at the top of the market, in September 2021.

Since then, of course, the long-anticipated market turn did occur, driven by geopolitical shocks and rising inflation. Central banks started increasing interest rates, which sucked the air out of an entire world of over-inflated assets, from speculative crypto to tech stocks. Public markets tanked, the IPO window shut down, and bit by bit, the malaise trickled down to private markets, first at the growth stage, then progressively to the venture and seed markets.
We'll talk about this new 2023 reality in the following order:

Data/AI companies in the new recessionary era
Frozen financing markets
Generative AI, a new financing bubble?
M&A

MAD companies facing recession

It's been rough for everyone out there, and Data/AI companies certainly haven't been immune. Capital has gone from abundant and cheap to scarce and expensive. Companies of all sizes in the MAD landscape have had to dramatically shift focus from growth at all costs to tight control over their expenses. Layoff announcements have become a sad part of our daily reality.

Looking at popular tracker Layoffs.fyi, many of the companies appearing on the 2023 MAD landscape have had to do layoffs, including, for a few recent examples: Snowplow, Splunk, MariaDB, Confluent, Prisma, Mapbox, Informatica, Pecan AI, Scale AI, Astronomer*, Elastic, UiPath, InfluxData, Domino Data Lab, Collibra, Fivetran, Graphcore, Mode, DataRobot, and many more (to see the full list, filter by industry, using "data").

For a while in 2022, we were in a moment of suspended reality – public markets were tanking, but underlying company performance was holding strong, with many companies continuing to grow fast and beating their plans. Over the last few months, however, overall market demand for software products has started to adjust to the new reality.

The recessionary environment has been enterprise-led so far, with consumer demand holding surprisingly strong. This has not helped MAD companies much, as the overwhelming majority of companies on the landscape are B2B vendors. First to cut spending were scale-ups and other tech companies, which resulted in many Q3 and Q4 sales misses at the MAD startups that target those customers. Now, Global 2000 customers have adjusted their 2023 budgets as well.
We are now in a new normal, with a vocabulary that will echo recessions past for some and will be a whole new muscle to build for younger folks: responsible growth, cost control, CFO oversight, long sales cycles, pilots, ROI.

This is also the big return of corporate governance: As the tide recedes, many issues that were hidden or deprioritized suddenly emerge in full force, and everyone is forced to pay a lot more attention. VCs on boards are less busy chasing the next shiny object and more focused on protecting their existing portfolio. CEOs are less constantly courted by obsequious potential next-round investors and discover the sheer difficulty of running a startup when the next round of capital at a much higher valuation does not magically materialize every 6 to 12 months.

The MAD world certainly has not been immune to the excesses of the bull market. As an example, scandal emerged at DataRobot after it was revealed that five executives were allowed to sell $32M in stock as secondaries, forcing the CEO to resign (the company was also sued for discrimination).

The silver lining for MAD startups is that spending on data, ML and AI still remains high on the CIO's priority list. This McKinsey study from December 2022 indicates that 63% of respondents expect their organizations' investment in AI to increase over the next three years.

Frozen financing markets

In 2022, both public and private markets effectively shut down, and 2023 is looking to be a tough year. The market will separate strong, durable data/AI companies with sustained growth and favorable cash flow dynamics from companies that have mostly been buoyed by capital, hungry for returns in a more speculative environment.

Public markets

As a "hot" category of software, public MAD companies were particularly impacted.
We are overdue for an update to our MAD Public Company Index, but overall, public data & infrastructure companies (the closest proxy to our MAD companies) saw a 51% drawdown, compared to the 19% decline for the S&P 500 in 2022. Many of these companies traded at significant premiums in 2021 in a low-interest environment. They could very well be oversold at current prices.

Snowflake was an $89.67B market cap company at the time of our last MAD and went on to reach a high of $122.94B in November 2021. It is currently trading at a $49.55B market cap at the time of writing.

Palantir was a $49.49B market cap company at the time of our last MAD but traded at $69.89B at its peak in January 2021. It is currently trading at a $19.14B market cap at the time of writing.

Datadog was a $42.60B market cap company at the time of our last MAD and went on to reach a high of $61.33B in November 2021. It is currently trading at a $25.40B market cap at the time of writing.

MongoDB was a $30.68B market cap company at the time of our last MAD and went on to reach a high of $39.03B in November 2021. It is currently trading at a $14.77B market cap at the time of writing.

The late 2020 and 2021 IPO cohorts fared even worse:

UiPath (2021 IPO) reached a peak of $40.53B in May 2021 and currently trades at $9.04B at the time of writing.

Confluent (2021 IPO) reached a peak of $24.37B in November 2021 and currently trades at $7.94B at the time of writing.

C3 AI (2021 IPO) reached a peak of $14.05B in February 2021 and currently trades at $2.76B at the time of writing.

Couchbase (2021 IPO) reached a peak of $2.18B in May 2021 and currently trades at $0.74B at the time of writing.

As to the small group of "deep tech" companies from our 2021 MAD landscape that went public, it was simply decimated. As an example, within autonomous trucking, companies like TuSimple (which did a traditional IPO), Embark Technologies (SPAC), and Aurora Innovation (SPAC) are all trading near (or even below!)
equity raised in the private markets.

Given market conditions, the IPO window has been shut, with little visibility on when it might re-open. Overall IPO proceeds have fallen 94% from 2021, while IPO volume sank 78% in 2022.

Interestingly, two of the very rare 2022 IPOs were MAD companies:

Mobileye, a world leader in self-driving technologies, went public in October 2022 at a $16.7B valuation. It has more than doubled its valuation since and currently trades at a market cap of $36.17B. Intel had acquired the Israeli company for over $15B in 2018 and had originally hoped for a $50B valuation, so the IPO was considered disappointing at the time. However, because it went out at the right price, Mobileye is turning out to be a rare bright spot in an otherwise very bleak IPO landscape.

MariaDB, an open source relational database, went public in December 2022 via SPAC. It saw its stock drop 40% on its first day of trading and now trades at a market cap of $194M (less than the total it had raised in private markets before going public).

It's unclear when the IPO window may open again. There is certainly tremendous pent-up demand from a number of unicorn-type private companies and their investors, but the broader financial markets will need to gain clarity around macro conditions (interest rates, inflation, geopolitical considerations) first.

Conventional wisdom is that when IPOs become a possibility again, the biggest private companies will need to go out first to open the market. Databricks is certainly one such candidate for the broad tech market and will be even more impactful for the MAD category. Like many private companies, Databricks raised at high valuations, most recently at $38B in its Series H in August 2021 – a high bar given current multiples, even though its ARR is now well over $1B.
While the company is reportedly beefing up its systems and processes ahead of a potential listing, CEO Ali Ghodsi has expressed on numerous occasions that he feels no particular urgency about going public. Other aspiring IPO candidates on our Emerging MAD Index (also due for an update, but still directionally correct) will probably have to wait for their turn.

Private markets

In private markets, this was the year of the Great VC Pullback. Funding dramatically slowed down. In 2022, startups raised an aggregate of ~$238B, a drop of 31% compared to 2021. The growth market, in particular, effectively died. Private secondary brokers experienced a burst of activity as many shareholders tried to exit their positions in startups perceived as overvalued, including many companies from the MAD landscape (ThoughtSpot, Databricks, Sourcegraph, Airtable, D2iQ, Chainalysis, H2O.ai, Scale AI, Dataminr, etc.).

The VC pullback came with a series of market changes that may leave companies orphaned at the time they need the most support. Crossover funds, which had a particularly strong appetite for data/AI startups, have largely exited private markets, focusing on cheaper buying opportunities in public markets. Within VC firms, lots of GPs have moved on or will be moving on, and some solo GPs may not be able (or willing) to raise another fund.

At the time of writing, the venture market is still at a standstill. Many data/AI startups, perhaps even more so than their peers, raised at aggressive valuations in the hot market of the last couple of years. For data infrastructure startups with strong founders, it was pretty common to raise a $20M Series A on an $80M-$100M pre-money valuation, which often meant a multiple of 100x or more on next year's ARR. The problem, of course, is that the very best public companies, such as Snowflake, Cloudflare or Datadog, trade at 12x to 18x next year's revenues (those numbers are up, reflecting a recent rally at the time of writing).
Startups, therefore, have a tremendous amount of growing to do to get anywhere near their most recent valuations, or they will face significant down rounds (or worse, no round at all). Unfortunately, this growth needs to happen in the context of slower customer demand. Many startups right now are sitting on solid amounts of cash and don't have to face their moment of reckoning in the financing market just yet, but that moment will inevitably come unless they become cash-flow positive.

Generative AI: A new financing bubble?

Generative AI (see Part IV) has been the one very obvious exception to the general market doom-and-gloom, a bright light not just in the data/AI world, but in the entire tech landscape. Particularly as the fortunes of web3/crypto started to turn, AI became the hot new thing once again – not the first time those two areas have traded places in the hype cycle.

Because generative AI is perceived as a potential "once-every-15-years" type of platform shift in the technology industry, VCs aggressively started pouring money into the space, particularly into founders coming out of research labs like OpenAI, DeepMind, Google Brain, and Facebook AI Research, with several AGI-type companies raising $100M+ in their first rounds of financing.

Generative AI is showing some signs of being a mini-bubble already. As there are comparatively few "assets" available on the market relative to investor interest, valuation is often no object when it comes to winning the deal. The market is showing signs of rapidly adjusting supply to demand, however, as countless generative AI startups are suddenly being created.
(As one viral tweet put it: "VCs switching from crypto to generative AI.")

Noteworthy financings in generative AI:

OpenAI received a $10B investment from Microsoft in January 2023.

Runway ML, an AI-powered video editing platform, raised a $50M Series C at a $500M valuation in December 2022.

ImagenAI, an AI-powered photo editing and post-production automation startup, raised $30M in December 2022.

Descript, an AI-powered media editing app, raised $50M in its Series C in November 2022.

Mem, an AI-powered note-taking app, raised $23.5M in its Series A in November 2022.

Jasper AI, an AI-powered copywriter, raised $125M at a $1.5B valuation in October 2022.

Stability AI, the generative AI company behind Stable Diffusion, raised $101M at a $1B valuation in October 2022.

You, an AI-powered search engine, raised $25M in its Series A financing.

Hugging Face, a repository of open source machine learning models, raised $100M in its Series C at a $1B valuation in May 2022.

Inflection AI, an AGI startup, raised $225M in its first round of equity financing in May 2022.

Anthropic, an AI research firm, raised $580M in its Series B (investors including SBF and Caroline Ellison!) in April 2022.

Cohere, an NLP platform, raised $125M in its Series B in February 2022.

Expect a lot more of this: Cohere is reportedly in talks to raise hundreds of millions of dollars in a funding round that could value the startup at more than $6 billion.

M&A

2022 was a difficult year for acquisitions, punctuated by the failed $40B acquisition of ARM by Nvidia (which would have affected the competitive landscape of everything from mobile to AI in data centers). The drawdown in the public markets, especially tech stocks, made acquisitions with any stock component more expensive compared to 2021. Late-stage startups with strong balance sheets, on the other hand, generally favored reducing burn instead of making splashy acquisitions.
Overall, startup exit values fell by over 90% year over year, to $71.4B in 2022 from $753.2B in 2021.

That said, there were several large acquisitions and a number of (presumably) small tuck-in acquisitions, a harbinger of things to come in 2023, as we expect many more of those in the year ahead (we discuss consolidation in Part III on Data Infrastructure). Private equity firms may play an outsized role in this new environment, whether on the buy or sell side.

Qlik just announced its intent to acquire Talend. This is notable because both companies are owned by Thoma Bravo, who presumably played marriage broker.

Progress also just completed its acquisition of MarkLogic, a NoSQL database provider, for $355M. MarkLogic, rumored to have revenues "around $100M," was owned by private equity firm Vector Capital Management.

MAD 2023, Part III: Data infrastructure back to reality

In the hyper-frothy environment of 2019-2021, the world of data infrastructure (née Big Data) was one of the hottest areas for both founders and VCs. It was dizzying and fun at the same time, and perhaps a little weird to see so much market enthusiasm for products and companies that are ultimately very technical in nature.

Regardless, as the market has cooled down, that moment is over. While good companies will continue to be created in any market cycle, and "hot" market segments will continue to pop up, the bar has escalated dramatically in terms of the differentiation and quality required for any new data infrastructure startup to get real interest from potential customers and investors.

Here is our take on some of the key trends in the data infra market in 2023. The first couple of trends are higher level and should be interesting to everyone; the others are more in the weeds:

Brace for impact: bundling and consolidation
The Modern Data Stack under pressure
The end of ETL?
Reverse ETL vs CDP
Data mesh, products, contracts: dealing with organizational complexity
[Convergence]
Bonus: What impact will AI have on data and analytics?

Brace for impact: Bundling and consolidation

If there's one thing the MAD landscape makes obvious year after year, it's that the data/AI market is incredibly crowded.

In recent years, the data infrastructure market was very much in "let a thousand flowers bloom" mode. The Snowflake IPO (the biggest software IPO ever) acted as a catalyst for this entire ecosystem. Founders started literally hundreds of companies, and VCs happily funded them (again, and again, and again) within a few months. New categories (e.g., reverse ETL, metrics stores, data observability) appeared and immediately became crowded with a number of hopefuls.

On the customer side, discerning buyers of technology, often found in scale-ups or public tech companies, were willing to experiment and try the new thing with little oversight from the CFO office. This resulted in many tools being tried and purchased in parallel.

Now, the music has stopped. On the customer side, buyers of technology are under increasing budget pressure and CFO control. While data/AI will remain a priority for many, even during a recessionary period, they have too many tools as it is, and they're being asked to do more with less. They also have fewer resources to engineer anything. They're less likely to be experimental or work with immature tools and unproven startups. They're more likely to pick established vendors that offer tightly integrated suites of products, stuff that "just works."

This leaves the market with too many data infrastructure companies doing too many overlapping things. In particular, there's an ocean of "single-feature" data infrastructure (or MLOps) startups (perhaps too harsh a term, as they're just at an early stage) that are going to struggle to meet this new bar.
Those companies are typically young (1-4 years in existence) and, due to their limited time on earth, their product is still largely a single feature, although every company hopes to grow into a platform; they have some good customers but not a resounding product-market fit just yet.

This class of companies has an uphill battle in front of them and a tremendous amount of growing to do in a context where buyers are going to be wary and VC cash is scarce. Expect the beginning of a Darwinian period ahead. The best (or luckiest, or best funded) of those companies will find a way to grow, expand from a single feature to a platform (say, from data quality to a full data observability platform), and deepen their customer relationships.

Others will be part of an inevitable wave of consolidation, either as a tuck-in acquisition for a bigger platform or as a startup-on-startup private combination. Those transactions will be small, and none of them will produce the kind of returns founders and investors were hoping for. (We are not ruling out the possibility of multi-billion-dollar mega deals in the next 12-18 months, but those will most likely require the acquirers to see the light at the end of the tunnel in terms of the recessionary market.)

Still, consolidation will be better than simply going out of business. Bankruptcy, an inevitable part of the startup world, will be much more common than in the last few years, as companies cannot raise their next round or find a home.

At the top of the market, the larger players have already been in full product expansion mode. It's been the cloud hyperscalers' strategy all along to keep adding products to their platforms. Now Snowflake and Databricks, the rivals in a titanic clash to become the default platform for all things data and AI (see the 2021 MAD landscape), are doing the same. Databricks seems to be on a mission to release a product in just about every box of the MAD landscape.
This product expansion has been done almost entirely organically, with a very small number of tuck-in acquisitions along the way – Datajoy and Cortex Labs in 2022. Snowflake has also been releasing features at a rapid pace, and it has become more acquisitive as well: It announced three acquisitions in the first couple of months of 2023 already.

Confluent, the public company built on top of the open-source streaming project Kafka, is also making interesting moves by expanding to Flink, a very popular stream processing engine. It just acquired Immerok. This was a quick acquisition, as Immerok was founded in May 2022 by a team of Flink committers and PMC members, funded with $17M in October, and acquired in January 2023.

Some slightly smaller but still unicorn-type startups are also starting to expand aggressively, encroaching on each other's territories in an attempt to grow into broader platforms. As an example, transformation leader dbt Labs first announced a product expansion into the adjacent semantic layer area in October 2022, then acquired an emerging player in the space, Transform (dbt's blog post provides a nice overview of the semantic layer and metrics store concepts), in February 2023.

Some categories in data infrastructure feel particularly ripe for consolidation of some sort – the MAD landscape provides a good visual aid for this, as the potential for consolidation maps pretty closely to the fullest boxes:

ETL and reverse ETL: Over the last three or four years, the market has funded a good number of ETL startups (to move data into the warehouse), as well as a separate group of reverse ETL startups (to move data out of the warehouse). It is unclear how many startups the market can sustain in either category. Reverse ETL companies are under pressure from different angles (see below), and it is possible that both categories may end up merging. ETL company Airbyte acquired reverse ETL startup Grouparoo.
Several companies like Hevo Data position as end-to-end pipelines, delivering both ETL and reverse ETL (with some transformation too), as does data syncing specialist Segment. Could ETL market leader Fivetran acquire or (less likely) merge with one of its reverse ETL partners like Census or Hightouch?

Data quality and observability: The market has seen a glut of companies that all want to be the "Datadog of data." What Datadog does for software (ensure reliability and minimize application downtime), those companies want to do for data – detect, analyze and fix all issues with respect to data pipelines. Those companies come at the problem from different angles: Some do data quality (declaratively or through machine learning), others do data lineage, and others do data reliability. Data orchestration companies also play in the space. Many of those companies have excellent founders, are backed by premier VCs and have built quality products. However, they are all converging in the same direction in a context where demand for data observability is still comparatively nascent.

Data catalogs: As data becomes more complex and widespread within the enterprise, there is a need for an organized inventory of all data assets. Enter data catalogs, which ideally also provide search, discovery and data management capabilities. While there is a clear need for the functionality, there are also many players in the category, with smart founders and strong VC backing, and here as well, it is unclear how many the market can sustain. It is also unclear whether data catalogs can remain separate entities outside of broader data governance platforms long term.

MLOps: While MLOps sits in the ML/AI section of the MAD landscape, it is also infrastructure, and it is likely to experience some of the same circumstances as the above. Like the other categories, MLOps plays an essential role in the overall stack, and it is propelled by the rising importance of ML/AI in the enterprise.
However, there is a large number of companies in the category, most of which are well-funded but early on the revenue front. They started from different places (model building, feature stores, deployment, transparency, etc.), but as they try to go from single feature to broader platform, they are on a collision course with each other. Also, many of the current MLOps companies have primarily focused on selling to scale-ups and tech companies. As they go upmarket, they may start bumping into the enterprise AI platforms that have been selling to Global 2000 companies for a while, like Dataiku, DataRobot and H2O, as well as the cloud hyperscalers.

The modern data stack under pressure

A hallmark of the last few years has been the rise of the "Modern Data Stack" (MDS). Part architecture, part de facto marketing alliance amongst vendors, the MDS is a series of modern, cloud-based tools to collect, store, transform and analyze data. At the center of it, there's the cloud data warehouse (Snowflake, etc.). Before the data warehouse, there are various tools (Fivetran, Matillion, Airbyte, Meltano, etc.) to extract data from its original sources and dump it into the data warehouse. At the warehouse level, there are other tools to transform data – the "T" in what used to be known as ETL (extract, transform, load) and has been reversed to ELT (here, dbt Labs reigns largely supreme). After the data warehouse, there are other tools to analyze the data (that's the world of BI, for business intelligence) or to extract the transformed data and plug it back into SaaS applications (a process known as "reverse ETL").

Up until recently, the MDS was a fun little world. As Snowflake's fortunes kept rising, so did the entire ecosystem around it. Now, the world has changed.
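The ELT pattern described above can be sketched in miniature. This is a hypothetical illustration only: SQLite stands in for a cloud warehouse, and the table and column names are invented. The point is the shape of the flow – raw data is loaded first, then transformed in place with SQL, dbt-style.

```python
import sqlite3

# SQLite stands in for a cloud warehouse (Snowflake, BigQuery, etc.).
conn = sqlite3.connect(":memory:")

# "EL": extract from a source system and load raw, untransformed rows.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1999, "complete"), (2, 450, "refunded"), (3, 7500, "complete")],
)

# "T": transform inside the warehouse with SQL, dbt-style --
# a derived model built on top of the raw table.
conn.execute("""
    CREATE TABLE orders AS
    SELECT id, amount_cents / 100.0 AS amount_usd
    FROM raw_orders
    WHERE status = 'complete'
""")

rows = conn.execute("SELECT id, amount_usd FROM orders ORDER BY id").fetchall()
print(rows)  # [(1, 19.99), (3, 75.0)]
```

In a real MDS deployment, the "EL" step would be handled by a tool like Fivetran or Airbyte and the "T" step by dbt models, but the division of labor is the same.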
As cost control becomes paramount, some may question the approach that is at the heart of the modern data stack: Dump all your data somewhere (a data lake, lakehouse or warehouse) and figure out what to do with it later – which turns out to be expensive and not always that useful.

Now the MDS is under pressure. In a world of cost control and rationalization, it's almost too obvious a target. It's complex (as customers need to stitch everything together and deal with multiple vendors). It's expensive (as every vendor wants their margin, and also because you need an in-house team of data engineers to make it all work). And it's arguably elitist (as those are the most bleeding-edge, best-of-breed tools, requiring customers to be sophisticated both technically and in terms of use cases), serving the needs of the few. What happens when MDS companies stop being friendly and start competing with one another for smaller customer budgets?

As an aside, the complexity of the MDS has given rise to a new category of vendors that "package" various products under one fully managed platform (as mentioned above, a new box in the 2023 MAD featuring companies like Y42 or Mozart Data). The underlying vendors are some of the usual suspects in the MDS, but most of those platforms abstract away both the business complexity of managing several vendors and the technical complexity of stitching together the various solutions.

The end of ETL?

As a twist on the above, there's a parallel discussion in data circles as to whether ETL should even be part of data infrastructure going forward. ETL, even with modern tools, is a painful, expensive and time-consuming part of data engineering. At its re:Invent conference last November, Amazon asked, "What if we could eliminate ETL entirely? That would be a world we would all love. This is our vision, what we're calling a zero ETL future.
And in this future, data integration is no longer a manual effort," announcing support for a "zero-ETL" solution that tightly integrates Amazon Aurora with Amazon Redshift. Under that integration, within seconds of transactional data being written into Aurora, the data is available in Amazon Redshift. The benefits of an integration like this are obvious: no need to build and maintain complex data pipelines, no duplicate data storage (which can be expensive), and always up-to-date data.

Now, an integration between two Amazon databases is not, in itself, enough to lead to the end of ETL, and there are reasons to be skeptical that a zero-ETL future will happen soon. But then again, Salesforce and Snowflake also announced a partnership to share customer data in real time across systems without moving or copying data, which follows the same general logic. Before that, Stripe had launched a data pipeline to help users sync payment data with Redshift and Snowflake.

The concept of change data capture (CDC) is not new, but it's gaining steam. Google already supports change data capture in BigQuery. Azure Synapse does the same by pre-integrating Azure Data Factory. There is a rising generation of startups in the space, like Estuary* and Upsolver. It seems that we're heading towards a hybrid future where analytic platforms will blend in streaming, integrating with data flow pipelines and Kafka pub/sub feeds.

Reverse ETL vs. CDP

Another somewhat-in-the-weeds but fun-to-watch part of the landscape has been the tension between reverse ETL (again, the process of taking data out of the warehouse and putting it back into SaaS and other applications) and Customer Data Platforms (products that aggregate customer data from multiple sources, run analytics on it, like segmentation, and enable actions, like marketing campaigns). Over the last year or so, the two categories have started converging into one another.
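The change data capture idea behind these zero-ETL integrations can be sketched in miniature. This is a hypothetical illustration under stated assumptions: two SQLite databases stand in for the transactional store (Aurora-like) and the analytics store (Redshift-like), the `users` table is invented, and a simple version-based poll replaces the log tailing a production CDC system would use.

```python
import sqlite3

# Two databases stand in for a transactional store and an analytics store.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, version INTEGER)")
target.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, version INTEGER)")

def apply_changes(last_version: int) -> int:
    """Pull rows changed since last_version and upsert them downstream.
    A real CDC system would tail the write-ahead log instead of polling."""
    rows = source.execute(
        "SELECT id, email, version FROM users WHERE version > ?", (last_version,)
    ).fetchall()
    target.executemany(
        "INSERT INTO users VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email, version = excluded.version",
        rows,
    )
    return max((v for _, _, v in rows), default=last_version)

source.execute("INSERT INTO users VALUES (1, 'a@example.com', 1)")
cursor = apply_changes(0)       # initial sync copies the row downstream
source.execute("UPDATE users SET email = 'b@example.com', version = 2 WHERE id = 1")
cursor = apply_changes(cursor)  # incremental sync picks up only the change

print(target.execute("SELECT email FROM users WHERE id = 1").fetchone()[0])
```

The appeal of zero-ETL offerings is precisely that this synchronization loop, however it is implemented, becomes the platform's problem rather than the customer's.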
Reverse ETL companies presumably learned that just being a pipeline on top of a data warehouse wasn't commanding enough wallet share from customers and that they needed to go further in providing value around customer data. Many reverse ETL vendors now position themselves as CDPs from a marketing standpoint.

Meanwhile, CDP vendors learned that being another repository where customers needed to copy massive amounts of data was at odds with the general trend of centralizing data around the data warehouse (or lake, or lakehouse). Therefore, CDP vendors started offering integrations with the main data warehouse and lakehouse providers. See, for example, ActionIQ* launching HybridCompute, mParticle launching Warehouse Sync, or Segment introducing reverse ETL capabilities. As they beef up their own reverse ETL capabilities, CDP companies are now starting to sell to a more technical audience of CIOs and analytics teams, in addition to their historical buyers (CMOs).

Where does this leave reverse ETL companies? One way they could evolve is to become more deeply integrated with the ETL providers, as discussed above. Another would be to evolve further towards becoming CDPs by adding analytics and orchestration modules.

Data mesh, products, contracts: Dealing with organizational complexity

As just about any data practitioner knows firsthand, success with data is certainly a technical and product effort, but it also very much revolves around process and organizational issues. In many organizations, the data stack looks like a mini-version of the MAD landscape. You end up with a variety of teams working on a variety of products. So how does it all work together? Who's in charge of what?

A debate has been raging in data circles about how best to go about it. There are a lot of nuances, and a lot of discussions where smart people disagree on, well, just about any part of it, but here's a quick overview.
We highlighted the data mesh as an emerging trend in the 2021 MAD landscape, and it's only been gaining traction since. The data mesh is a distributed, decentralized (not in the crypto sense) approach to managing data tools and teams. Note how it's different from a data fabric – a more technical concept, basically a single framework to connect all data sources within the enterprise, regardless of where they're physically located. The data mesh leads to a concept of data products – which could be anything from a curated data set to an application or an API. The basic idea is that the team that creates a data product is fully responsible for it (including quality, uptime, etc.). Business units within the enterprise then consume the data product on a self-service basis. A related idea is data contracts: "API-like agreements between software engineers who own services and data consumers that understand how the business works in order to generate well-modeled, high-quality, trusted, real-time data." There have been all sorts of fun debates about the concept. The essence of the discussion is whether data contracts only make sense in very large, very decentralized organizations, as opposed to the 90% of smaller companies.

Bonus: How will AI impact data infrastructure?

With the current explosive progress in AI, here's a fun question: data infrastructure has certainly been powering AI, but will AI now impact data infrastructure? Some data infrastructure providers have already been using AI for a while – see, for example, Anomalo leveraging ML to identify data quality issues in the data warehouse. But with the rise of Large Language Models, there's a new interesting angle. In the same way LLMs can create conventional programming code, they can also generate SQL, the language of data analysts. The idea of enabling non-technical users to search analytical systems is not new, and various providers already support variations of it – see ThoughtSpot, Power BI or Tableau.
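To make the natural-language-to-SQL idea concrete, here is a deliberately crude sketch. A real system would send the table schema plus the user's question to an LLM; the keyword rules below merely stand in for that call, and the schema and rules are invented for illustration.

```python
# Toy natural-language-to-SQL translator. The if/elif rules are a hypothetical
# stand-in for an LLM call; only the input/output contract is the point here.

import sqlite3

def question_to_sql(question: str) -> str:
    # A real implementation would prompt an LLM with the schema + question.
    q = question.lower()
    if "how many" in q:
        return "SELECT COUNT(*) FROM orders"
    if "total revenue" in q:
        return "SELECT SUM(amount) FROM orders"
    raise ValueError("question not understood")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 15.5)])

sql = question_to_sql("How many orders did we get?")
count = conn.execute(sql).fetchone()[0]
print(sql, "->", count)  # SELECT COUNT(*) FROM orders -> 2
```

The hard problems in production, grounding the model in the actual schema and guarding against wrong-but-plausible queries, are exactly where the vendors named above compete.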
Here are some good pieces on the topic: LLM Implications on Analytics (and Analysts!) by Tristan Handy of dbt Labs and The Rapture and the Reckoning by Benn Stancil of Mode.

MAD 2023, part IV: Trends in ML/AI

The excitement! The drama! The action! Everybody is suddenly talking breathlessly about AI. OpenAI gets a $10B investment. Google is in Code Red. Sergey is coding again. Bill Gates says what's been happening in AI in the last 12 months is "every bit as important as the PC or the internet." Brand-new startups are popping up (20 generative AI companies in the winter '23 YC batch alone). VCs are back to chasing pre-revenue startups at billions in valuation. So what does it all mean? Is this one of those breakthrough moments that only happen every few decades? Or just the logical continuation of work that has been happening for many years? Are we in the early days of a true exponential acceleration? Or at the top of one of those hype cycles, as many in tech are desperate for the next big platform shift after social and mobile and the crypto headfake? The answer to all those questions is… yes. Let's dig in:

AI goes mainstream
Generative AI becomes a household name
The inevitable backlash
[Big progress in reinforcement learning]
[The emergence of a new AI political economy]
[Big Tech has a head start over startups]
[Are we getting closer to AGI?]

AI goes mainstream

It had been a wild ride in the world of AI throughout 2022, but what truly took things to a fever pitch was, of course, the public release of OpenAI's conversational bot, ChatGPT, on November 30, 2022. ChatGPT, a chatbot with an uncanny ability to mimic a human conversationalist, quickly became the fastest-growing product, well, ever. For anyone who was around then, the experience of first interacting with ChatGPT was reminiscent of the first time they interacted with Google in the late nineties. Wait, is it really that good? And that fast? How is this even possible?
Or the iPhone when it first came out. Basically, a first glimpse into what feels like an exponential future. ChatGPT immediately took over every business meeting, conversation, dinner and, most of all, every bit of social media. Screenshots of smart, amusing and occasionally wrong replies by ChatGPT became ubiquitous on Twitter. We all just had to chat about ChatGPT. One viral joke captured the moment: "Impressive stats from ChatGPT: 1 million users in 5 days, who produced 100 billion tweets about the fact that they are users." By January, ChatGPT had reached 100M users. A whole industry of overnight experts emerged on social media, with a never-ending bombardment of explainer threads coming to the rescue of anyone who had been struggling with ChatGPT (literally, no one asked) and ambitious TikTokers teaching us the ways of prompt engineering, meaning providing the kind of input that would elicit the best response from ChatGPT. After being exposed to a non-stop barrage of tweets on the topic, this was the sentiment: "I'm struggling to use ChatGPT, it's really unfortunate there are no threads on here to help me." ChatGPT continued to accumulate feats. It passed the bar exam. It passed the US medical licensing exam. ChatGPT didn't come out of nowhere. AI circles had been buzzing about GPT-3 since its release in June 2020, raving about a quality of text output so high that it was difficult to determine whether or not it was written by a human. But GPT-3 was provided as an API targeting developers, not the broad public. The release of ChatGPT (based on GPT-3.5) feels like the moment AI truly went mainstream in the collective consciousness. We are all routinely exposed to AI prowess in our everyday lives through voice assistants, auto-categorization of photos, using our faces to unlock our cell phones, or receiving calls from our banks after an AI system detected possible financial fraud.
But, beyond the fact that most people don't realize that AI powers all of those capabilities and more, those arguably feel like one-trick ponies. With ChatGPT, suddenly, you had the experience of interacting with something that felt like an all-encompassing intelligence. The hype around ChatGPT is not just fun to talk about. It's very consequential, because it has forced the industry to react aggressively to it, unleashing, among other things, an epic battle for internet search.

The exponential acceleration of generative AI

But, of course, it's not just ChatGPT. For anyone who was paying attention, the last few months saw a dizzying succession of groundbreaking announcements, seemingly every day. With AI, you could now create audio, code, images, text and videos. What was at some point called synthetic media (a category in the 2021 MAD landscape) became widely known as generative AI: a term still so new that it does not have an entry in Wikipedia at the time of writing. The rise of generative AI has been several years in the making. Depending on how you look at it, it traces its roots back to deep learning (which is several decades old but dramatically accelerated after 2012) and the advent of Generative Adversarial Networks (GANs) in 2014, led by Ian Goodfellow under the supervision of his professor and Turing Award recipient, Yoshua Bengio. Its seminal moment, however, came barely five years ago with the publication of the transformer (the "T" in GPT) architecture in 2017, by Google. Coupled with rapid progress in data infrastructure, powerful hardware and a fundamentally collaborative, open source approach to research, the transformer architecture gave rise to the Large Language Model (LLM) phenomenon. The concept of a language model itself is not particularly new. A language model's core function is to predict the next word in a sentence. However, transformers brought a multimodal dimension to language models.
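That core objective, predicting the next word, can be reduced to a toy bigram model in a few lines. Real LLMs replace this lookup table with a transformer over billions of parameters, but the training objective is essentially the same counting-and-predicting exercise, at incomprehensible scale.

```python
# A language model's core function, reduced to a bigram counter: count how
# often each word follows each other word, then predict the most frequent
# continuation. The tiny corpus is made up for illustration.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent word observed after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most common continuation of "the"
```

The gap between this and GPT-3 is representation (learned embeddings and attention instead of literal word counts) and scale, not the objective itself.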
There used to be separate architectures for computer vision, text and audio. With transformers, one general architecture can now gobble up all sorts of data, leading to an overall convergence in AI. In addition, the big change has been the ability to massively scale those models. OpenAI's GPT models are a flavor of transformers that it has trained on the Internet, starting in 2018. GPT-3, its third-generation LLM, is one of the most powerful models currently available. It can be fine-tuned for a wide range of tasks – language translation, text summarization and more. GPT-4 is expected to be released sometime in 2023 and is rumored to be even more mind-blowing. (ChatGPT is based on GPT-3.5, a variant of GPT-3.) OpenAI also played a driving role in AI image generation. In early 2021, it released CLIP, an open source, multimodal, zero-shot model. Given an image and text descriptions, the model can predict the most relevant text description for that image without optimizing for a particular task. OpenAI doubled down with DALL-E, an AI system that can create realistic images and art from a description in natural language. The particularly impressive second version, DALL-E 2, was broadly released to the public at the end of September 2022. There are already multiple contenders vying to be the best text-to-image model. Midjourney entered open beta in July 2022 (it's currently only accessible through its Discord*). Stable Diffusion, another impressive model, was released in August 2022. It originated through the collaboration of several entities, in particular Stability AI, CompVis LMU and Runway ML. It offers the distinction of being open source, which DALL-E 2 and Midjourney are not. And even those developments don't capture the exponential acceleration of AI releases that has occurred since the middle of 2022.
In September 2022, OpenAI released Whisper, an automatic speech recognition (ASR) system that enables transcription in multiple languages as well as translation from those languages into English. Also in September 2022, MetaAI released Make-A-Video, an AI system that generates videos from text. In October 2022, CSM (Common Sense Machines) released CommonSim-1, a model to create 3D worlds. In November 2022, MetaAI released CICERO, the first AI to play the strategy game Diplomacy at a human level, described as "a step forward in human-AI interactions with AI that can engage and compete with people in gameplay using strategic reasoning and natural language." In January 2023, Google Research announced MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff." Another particularly fertile area for generative AI has been the creation of code. In 2021, OpenAI released Codex, a model that translates natural language into code. You can use Codex for tasks like "turning comments into code, rewriting code for efficiency, or completing your next line in context." Codex is based on GPT-3 and was also trained on 54 million GitHub repositories. In turn, GitHub Copilot uses Codex to suggest code right from the editor. Meanwhile, Google's DeepMind released AlphaCode in February 2022, and Salesforce released CodeGen in March 2022. Huawei introduced PanGu-Coder in July 2022.

The inevitable backlash

The exponential acceleration in AI progress over the last few months has taken most people by surprise. It is a clear case where technology is way ahead of where we are as humans in terms of society, politics, legal frameworks and ethics. For all the excitement, it was received with horror by some, and we are just in the early days of figuring out how to handle this massive burst of innovation and its consequences. ChatGPT was pretty much immediately banned by some schools, AI conferences (the irony!)
and programmer websites. Stable Diffusion was misused to create an NSFW porn generator, Unstable Diffusion, later shut down on Kickstarter. There are allegations of exploitation of Kenyan workers involved in the data labeling process. Microsoft/GitHub is getting sued for IP violations in training Copilot, accused of killing open source communities. Stability AI is getting sued by Getty for copyright infringement. Midjourney might be next (Meta is partnering with Shutterstock to avoid this issue). When an AI-generated work, "Théâtre d'Opéra Spatial," took first place in the digital category at the Colorado State Fair, artists around the world were up in arms.

AI and jobs

A lot of people's reaction when confronted with the power of generative AI is that it will kill jobs. The common wisdom in years past was that AI would gradually automate the most boring and repetitive jobs, and that it would come for creative jobs last, because creativity is the most quintessentially human trait. But here we are, with generative AI going straight after creative pursuits. Artists are learning to co-create with AI. Many are realizing that there's a different kind of skill involved. Jason Allen, the creator of Théâtre d'Opéra Spatial, explains that he spent 80 hours and created 900 images before getting to the perfect combination. Similarly, coders are figuring out how to work alongside Copilot. AI leader Andrej Karpathy says Copilot already writes 80% of his code. Early research seems to indicate significant improvements in developer productivity and happiness. It seems that we're evolving towards a co-working model where AI models work alongside humans as "pair programmers" or "pair artists." Perhaps AI will lead to the creation of new jobs. There's already a marketplace for selling high-quality text prompts.

AI bias

A serious strike against generative AI is that it is biased and possibly toxic.
Given that AI reflects its training dataset, and considering that GPT and others were trained on the highly biased and toxic Internet, it's no surprise that this would happen. Early research has found that image generation models like Stable Diffusion and DALL-E not only perpetuate but also amplify demographic stereotypes. At the time of writing, there is a controversy in conservative circles that ChatGPT is painfully woke.

AI disinformation

Another inevitable question is all the nefarious things that can be done with such a powerful new tool. New research shows AI's ability to simulate reactions from particular human groups, which could unleash another level in information warfare. Gary Marcus warns us about AI's Jurassic Park moment – how disinformation networks could take advantage of ChatGPT, "attacking social media and crafting fake websites at a volume we have never seen before." AI platforms are moving promptly to help fight back, in particular by detecting what was written by a human vs. what was written by an AI. OpenAI just launched a new classifier to do that, which is beating the state of the art in detecting AI-generated text.

Is AI content just… boring?

Another strike against generative AI is that it could be mostly underwhelming. Some commentators worry about an avalanche of uninteresting, formulaic content meant to help with SEO or demonstrate shallow expertise, not dissimilar to what content farms (a la Demand Media) used to do. As Jack Clark puts it in his Import AI newsletter: "Are we building these models to enrich our own experience, or will these models ultimately be used to slice and dice up human creativity and repackage and commoditize it? Will these models ultimately enforce a kind of cultural homogeneity acting as an anchor forever stuck in the past?
Or could these models play their own part in a new kind of sampling and remix culture for music?"

AI hallucination

Finally, perhaps the biggest strike against generative AI is that it is often just wrong. ChatGPT, in particular, is known for "hallucinating," meaning making up facts while conveying them with utter self-confidence. Leaders in AI have been very explicit about this risk, including OpenAI's CEO Sam Altman. The big companies are well aware of it, too. MetaAI introduced Galactica, a model designed to assist scientists, in November 2022 but pulled it after three days: the model generated both convincing scientific content and convincing (and occasionally racist) nonsense. Google kept its LaMDA model very private, available to only a small group of people through AI Test Kitchen, an experimental app. The genius of Microsoft working with OpenAI as an outsourced research arm was that OpenAI, as a startup, could take risks that Microsoft could not. One can assume that Microsoft was still reeling from the Tay disaster in 2016. However, Microsoft was forced by competition (or could not resist the temptation) to open Pandora's box and add GPT to its Bing search engine. That did not go as well as it could have, with Bing threatening users or declaring its love to them. Subsequently, Google also rushed to market its own ChatGPT competitor, the interestingly named Bard. This did not go well either, and Google lost $100B in market capitalization after Bard made factual errors in its first demo.

The business of AI: Big Tech has a head start over startups

The question on everyone's minds in venture and startup circles: what is the business opportunity? The recent history of technology has seen a major platform shift roughly every 15 years: the mainframe, the PC, the internet and mobile. Many thought crypto and blockchain architecture was the next big shift but, at a minimum, the jury is out on that one for now.
Is generative AI that once-every-15-years kind of generational opportunity that is about to unleash a massive new wave of startups (and funding opportunities for VCs)? Let's look into some of the key questions.

Will incumbents own the market?

The success story in Silicon Valley lore goes something like this: big incumbent owns a large market but gets entitled and lazy; little startup comes up with a 10x better technology; against the odds and through great execution (and judicious advice from the VCs on the board, of course), little startup hits hyper-growth, becomes big and overtakes the big incumbent. The issue in AI is that little startups are facing a very specific type of incumbent – the world's biggest technology companies, including Alphabet/Google, Microsoft, Meta/Facebook and Amazon/AWS. Not only are those incumbents not "lazy," but in many ways they've been leading the charge in AI innovation. Google thought of itself as an AI company from the very beginning ("Artificial intelligence would be the ultimate version of Google… that is basically what we work on," said Larry Page in 2000). The company produced many key innovations in AI, including transformers, as mentioned, TensorFlow and the Tensor Processing Units (TPUs). And transformers are just one of the many innovations Google has released over the years. Meta/Facebook created PyTorch, one of the most important and most used machine learning frameworks. Amazon, Apple, Microsoft and Netflix have all produced groundbreaking work. Incumbents also have some of the very best research labs, experienced machine learning engineers, massive amounts of data, tremendous processing power and enormous distribution and branding power. And finally, AI is likely to become even more of a top priority as it is becoming a major battleground.
As mentioned earlier, Google and Microsoft are now engaged in an epic battle in search, with Microsoft viewing GPT as an opportunity to breathe new life into Bing, and Google viewing it as a potentially existential threat. Meta/Facebook has made a huge bet in a very different area – the metaverse. That bet continues to prove very controversial. Meanwhile, it's sitting on some of the best AI talent and technology in the world. How long until it reverses course and starts doubling or tripling down on AI?

Is AI just a feature?

Beyond Bing, Microsoft quickly rolled out GPT in Teams. Notion launched Notion AI, a new GPT-3-powered writing assistant. Quora launched Poe, its own AI chatbot. Customer service leaders Intercom and Ada* announced GPT-powered features. How quickly and seemingly easily companies are rolling out AI-powered features suggests that AI is going to be everywhere soon. In prior platform shifts, a big part of the story was that every company out there adopted the new platform: businesses became internet-enabled, everyone built a mobile app, etc. We don't expect anything different to happen here. We've long argued in prior posts that the fate of successful data and AI technologies is to eventually become ubiquitous and disappear into the background. It's the ransom of success for enabling technologies to become invisible.

What are the opportunities for startups?

However, as history has shown time and again, don't discount startups. Give them a technology breakthrough, and entrepreneurs will find a way to build great companies. Yes, when mobile appeared, all companies became mobile-enabled. However, founders built great startups that could not have existed without the mobile platform shift – Uber being the most obvious example. Who will be the Uber of generative AI? The new generation of AI labs is perhaps building the AWS, rather than the Uber, of generative AI.
OpenAI, Anthropic, Stability AI, Adept, Midjourney and others are building broad horizontal platforms upon which many applications are already being created. It is an expensive business, as building large language models is extremely resource intensive, although perhaps costs are going to drop rapidly. The business model of those platforms is still being worked out. OpenAI launched ChatGPT Plus, a paying premium version of ChatGPT. Stability AI plans on monetizing its platform by charging for customer-specific versions. There's been an explosion of new startups leveraging GPT, in particular, for all sorts of generative tasks, from creating code to marketing copy to videos. Many are derided as being a "thin layer" on top of GPT. There's some truth to that, and their defensibility is unclear. But perhaps that's the wrong question to ask. Perhaps those companies are just the next generation of software rather than AI companies. As they build more functionality around things like workflow and collaboration on top of the core AI engine, they will be no more, but also no less, defensible than your average SaaS company. We believe that there are many opportunities to build great companies: vertical-specific or task-specific companies that will intelligently leverage generative AI for what it is good at. AI-first companies that will develop their own models for tasks that are not generative in nature. LLM-ops companies that will provide the necessary infrastructure. And so many more. This next wave is just getting started, and we can't wait to see what happens.

Matt Turck is a VC at FirstMark, where he focuses on SaaS, cloud, data, ML/AI, and infrastructure investments. Matt also organizes Data Driven NYC, the largest data community in the U.S. This story originally appeared on Mattturck.com.

Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own!

© 2023 VentureBeat. All rights reserved.
"3 ways businesses can prepare as generative AI transforms enterprises | VentureBeat"
"https://venturebeat.com/ai/generative-ai-transform-enterprise-tech-prepare"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 3 ways businesses can prepare as generative AI transforms enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Investment in artificial intelligence (AI) has been booming for years now, and it’s not slowing down. Some researchers expect overall AI investment to push $500 billion by the end of the decade. That is reasonable when viewed from an investor perspective. Venture Capital firm Sequoia Capital, for example, has stated that generative AI alone has the potential to generate trillions of dollars of economic value. Generative AI — which includes buzzy projects like OpenAI’s ChatGPT — is based on AI technology that recently matured and became available to the public. But we’re reaching an inflection point as its potential starts to blossom and money begins to pour in. 
In fact, while generative AI currently accounts for only about 1% of the AI-based data being produced, it's expected to reach 10% by 2025, according to Gartner. This estimate could prove to be conservative. Nina Schick, an AI thought leader, recently shared her view with Yahoo Finance that 90% of online content could be generated by AI by 2025. This data can be used for countless business purposes, and it's poised to entirely change the way that we think about work. In other words, we are standing right at the edge of a revolution.

How AI is changing

So, what is different about today's AI developments? With tools like ChatGPT, AI is now generating a new type of conversation-like content that can entirely redefine the way we use and interact with data. This clearly has radical implications for creative professionals in fields like education, marketing and business analytics, and it could portend a monumental shift in how their work gets done. However, what it means for those of us on the technology side of the house — and, more precisely, what it means for the optimization of business processes and operations — is not yet settled. Right now, there is no powerful enterprise use case at scale for generative AI that will directly impact the top and bottom lines of today's leading businesses. But make no mistake, there will be, and it will likely appear within a year. So enterprises must be studying this technology right now, because what will separate the winners from the losers is knowing how to use it. And I believe the key to success at using generative AI lies in understanding the primal and foundational importance of data quality.

Why data is the skeleton key

Think about it like this: Generative AI is, quite literally, data-driven.
To be able to output anything at all requires a wealth of data primed for analysis. That's why investing in the building and maintenance of a clear data corpus will be the most important piece of a successful future in generative AI. It can massively accelerate the "learning" capabilities of generative AI-based solutions. When data is as valid, accurate, complete, consistent and uniform as possible across the entire enterprise, an intelligent generative AI tool can serve as the de facto digital assistant we always dreamt of, serving teams across all departments and functions. Any question may finally be answerable.

Three actionable insights

So, how can you prepare today for the yet-to-be-determined future? Here are three actionable insights.

1. Invest in high-quality, 'machine-learning-ready' data

With generative AI, you won't need an abundance of data scientists on hand to build relevant intelligence and insights. Instead, you'll need a few experts who understand the underlying technologies of generative AI, such as large language models, and a full team focused on making sure the data being input is the right data and in the right format. AI can do all the analysis, leaving leaders to focus on making the right decisions for the business. In other words, it's less about spending on AI and more about spending on stellar data quality and data management.

2. Prepare employees to embrace a new co-pilot

Generative AI also has the potential to shift the paradigm for employees. With it, a new reality emerges in which employees are working alongside a "co-pilot" that can answer any question and has a long-term memory of every topic ever discussed. Encouraging employees to embrace AI as part of their day-to-day working lives will help workers optimize the technology to fit their specific roles.

3. Establish clear governance to limit risk

Technology is not always perfect, and new innovations require a full assessment of potential outcomes and ramifications.
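One governance guardrail can be made concrete in a few lines: an allowlist that controls which fields are ever "exposed" to a generative AI tool. This is a minimal sketch under assumed conditions; the field names and the `redact_for_ai` helper are hypothetical, not part of any product.

```python
# Sketch of a governance layer that redacts non-allowlisted fields before any
# record reaches a generative-AI tool. All field names are invented examples.

ALLOWED_FIELDS = {"product", "category", "public_description"}

def redact_for_ai(record: dict) -> dict:
    """Pass through only fields explicitly approved for AI consumption."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

crm_row = {
    "product": "Widget Pro",
    "category": "hardware",
    "customer_ssn": "123-45-6789",   # must never reach the model
    "public_description": "A rugged industrial widget.",
}

safe = redact_for_ai(crm_row)
print(safe)  # the SSN field is gone before the record leaves the boundary
```

An explicit allowlist (rather than a blocklist) fails safe: a newly added sensitive field is excluded by default until someone deliberately approves it.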
This isn't just a matter of ethics; there can be real negative business consequences. What if your generative AI tool, for instance, starts spitting out offensive content during your shiny new marketing campaign? Are you prepared for that possibility? That is why you must establish clear guardrails for supervising and governing your AI technology. This includes deeply evaluating what kind of data you would like to "expose" and give generative AI-based solutions access to. It's not something that can run on autopilot, and we still don't know how costly or challenging it will be to scale. So, we need to make sure we're thinking through everything — and taking a measured, strategic approach to protecting our future. Generative AI prime time is starting now, and it will dramatically change enterprise software. The specifics are still to be determined, but the change is coming soon. Enterprises should take this moment to prepare their data, policies and workforce for this emerging reality.

Yaad Oren is Managing Director of SAP Labs U.S. and Head of SAP Innovation Center Network.
"Announcing the winners of VentureBeat’s 5th Annual Women in AI awards | VentureBeat"
"https://venturebeat.com/ai/announcing-the-winners-of-venturebeats-5th-annual-women-in-ai-awards"
"Announcing the winners of VentureBeat’s 5th Annual Women in AI awards VentureBeat announced the winners of the fifth annual Women in AI Awards today at VB Transform. The awards recognize and honor the women leaders and changemakers in the field of AI. The nominees were submitted by the public and winners chosen by a VentureBeat committee. Winners were selected based on their commitment to the industry, their work to increase inclusivity in the field and their positive influence in the community. The winners were presented with the awards by VentureBeat’s Sharon Goldman, senior AI writer, and Gina Joseph, chief strategy officer. Joseph emphasized the importance of the awards and the Women in AI breakfast. “This is why we do it. I mean, you heard it directly from the women leaders. 
They are making an impact, they are making a difference and we need to help support each other and as organizations, as leaders, as influencers we need to put a spotlight on women in tech, women leaders, women in AI.” AI Entrepreneur: Bringing AI out of the lab and into the real world This award honors a woman who has started companies that are showing great promise in AI. Consideration was given to things like business traction, the technology solution, and impact in the AI space. Our winner is May Wang, CTO of IoT security at Palo Alto Networks. Wang has done pioneering work in AI-based security and is leading the effort to leverage AI to revolutionize cybersecurity as a whole. She co-founded Zingbox, an IoT cybersecurity company that built the first AI-based cybersecurity solution for IoT. During her acceptance speech at VentureBeat Transform today, she recalled meeting with a female investor while trying to get venture capital funding for Zingbox. “She said to me, ‘I’ve been investing in cybersecurity for more than 10 years. You’re the first female founder sitting in this boardroom,’” Wang recalled. “Last year, women-founded companies only raised 1.9% of all VC funds. We have a long way to go for both AI and female engineers. Let’s work together and support each other.” AI Research: Fueling the next wave of transformative AI This award honors a woman who has made a significant impact in an area of research in AI, helping accelerate progress either within her organization, as part of academic research, or impacting AI generally. Our winner is Karen Myers, lab director at the artificial intelligence center at SRI International. Myers received the top honor for SRI technical staff and was named an SRI Fellow in 2016; she became director of the AI center the following year. 
She is the author of more than 100 publications and seven issued patents. Her research has impacted both the commercial and government sectors, with transitions in areas spanning natural language processing, workflow automation, robotics and intelligent assistance. During her acceptance speech today, she recalled her early years working in AI and being the only woman in most rooms she was in. She urged everyone in attendance and watching virtually to encourage the young girls in their lives to study computer science. She said only 25% of computer engineering degrees were earned by women. “We need to get more women in the field because it’s good for everybody if we have more diversity in the field, so go home and do your part. “So thank you so much for shining the light on all the great things that women in AI are doing, and I would just ask everybody, please encourage your daughters, your nieces, your neighbors to get into computer science. We still have a problem in the field that right now only 25% of the computer engineering degrees are for women. And we need to fix this problem.” AI Mentorship: Building up the next generation of women in AI This award honors a woman leader who has helped mentor other women in the field of AI, providing guidance and support and/or encouraging more women to enter the field. Our winner is Chenxi Wang, founder and general partner at Rain Capital, a cybersecurity-focused venture capital firm that is 100% woman-managed. Wang recently founded the Forte Group, a women-in-tech advocacy group. Her career began as a faculty member at Carnegie Mellon University. She has a Ph.D. in computer science from the University of Virginia. During her acceptance speech at VB Transform, Wang recalled how she was told at numerous points during her academic and corporate career that she could not achieve her dreams and goals. “Then I started my own venture fund, and yes, it’s possible,” she said. 
Responsibility & Ethics of AI: Thoughtfully building AI that leads to a better and more equitable world This award honors a woman who demonstrates exemplary leadership and progress in the growing hot topic of responsible AI. Our winner, Diya Wynn, is senior practice manager for responsible AI at AWS. She has a passion for developing current and emerging leaders; promoting STEM to the underrepresented; and driving diversity, equity and inclusion (DEI). Wynn mentors young students and guest lectures at universities across the world. During her acceptance speech at VB Transform, Wynn said it was an honor to be recognized for her work. She hopes that “girls [in general] and Black girls will see me as an example of something that is possible for them.” Wynn said it was an honor to know that the work she initiated helps customers build responsibly and operationalize responsibly, “advancing the science around responsible AI and the investments we’re making and building into the next generation. Those are the right things for us to do to build a better future. A future where building responsibly and inclusively is just the way that we all know.” Rising Star: Honoring women in the early stages of their career who demonstrate that ‘something special’ This award honors a woman in the early stage of her AI career who has demonstrated exemplary leadership traits. Our winner is Mahsa Ghafarianzadeh, engineering manager of behavior prediction at Zoox. Ghafarianzadeh was born and raised in Iran, and came to the U.S. to pursue her passion for robotics. She has a Ph.D. in computer science. Ghafarianzadeh started at Zoox as a research intern working in deep learning and computer vision. She went on to become a research engineer and then engineering manager on the software prediction team. Ghafarianzadeh is named on 28 patents globally, spanning eight patent families. During her acceptance speech, Ghafarianzadeh thanked her mentors and her mother for supporting and guiding her. 
She also dedicated her award to the women in Iran “who have been fighting for their freedom in the past year.” We’d like to congratulate all of the women who were nominated to receive a Women in AI Award. Thanks to everyone for their nominations and for contributing to the growing awareness of women who are making a significant difference in AI. >> Follow all our VentureBeat Transform 2023 coverage << VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,485
2,022
"OpenAI CEO admits ChatGPT risks. What now? | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/openai-ceo-admits-chatgpt-risks-what-now-the-ai-beat"
"OpenAI CEO admits ChatGPT risks. What now? | The AI Beat OpenAI CEO Sam Altman Over the weekend, OpenAI CEO Sam Altman suddenly jumped into the Twitter fray around ChatGPT, the company’s recently-released conversational text-generation model, with a surprisingly firm note of caution: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” he tweeted on Saturday. “It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” The thread concluded with a note that for ChatGPT, “fun creative inspiration; great! reliance for factual queries; not such a good idea. We will work hard to improve!” OpenAI grapples with ChatGPT hype What prompted Altman’s comments? 
After all, when OpenAI released ChatGPT on November 30, while he did caution that it was an early demo and research release with “a lot of limitations,” he also hyped up future applications: “Soon you will be able to have helpful assistants that talk to you, answer questions, and give advice,” he tweeted. “Later you can have something that goes off and does tasks for you. [E]ventually you can have something that goes off and discovers new knowledge for you.” Altman’s latest cautionary notes probably aren’t responding to last week’s AI Beat column. More likely, they emerge as a result of the past week-and-a-half of massive hype — and drumbeat of criticism. So far, OpenAI’s ChatGPT has been described as everything from a “sensation” and “the most disruptive technology since [fill in the blank]” to a “world-class bull**** artist” and “kind of like that drunk guy or gal you meet at the bar who never stops talking, blathers on and on with an engaging combination of facts and random bullshit, but that you’d certainly never want to take home to your parents.” Perhaps, most notably, Altman is seeing the effects of OpenAI’s research demo hitting the mainstream with a wallop. The New York Times, the Wall Street Journal, the Washington Post, the Atlantic and even Fox Weather have covered it in just the past few days. What happens now with ChatGPT? For now, it seems the ChatGPT horse is fully out of the barn and running down Main Street. There are zero signs of its popularity slowing down: In fact, OpenAI seems to be having trouble keeping up with capacity. Some have reported receiving notes saying “Whoa there! You might have to wait a bit. 
Currently we are receiving more requests than we are comfortable with.” But while the dizzying pace of discourse around ChatGPT continues — from those pointing fingers at Google for supposedly lagging behind in LLMs, to concerns about the future of college essays — it seems those developing these models are keeping their heads down, aware of the fierce competition ahead. On the OpenAI side, it appears clear that the company is using this period of widespread community experimentation with ChatGPT to get RLHF — reinforcement learning from human feedback — for a highly-anticipated future release of GPT-4. Of course, while Stability AI CEO Emad Mostaque says exactly that, he also represents the other side of the coin: those rapidly working around the clock to produce an open-source variant of ChatGPT. LAION, one of the creators of Stable Diffusion, says it is already actively working on that. So, we wait. In the meantime, I promise that none of the above text was written by ChatGPT. I’m holding my own — for now. "
14,486
2,022
"'Sentient' artificial intelligence: Have we reached peak AI hype? | VentureBeat"
"https://venturebeat.com/ai/sentient-artificial-intelligence-have-we-reached-peak-ai-hype"
"‘Sentient’ artificial intelligence: Have we reached peak AI hype? Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend. Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLM), was sentient. Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave last Monday, and who “became ordained as a mystic Christian priest, and served in the Army before studying the occult,” had begun testing LaMDA to see if it used discriminatory or hate speech. 
Instead, Lemoine began “teaching” LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story: “It’s a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.” AI community pushes back on “sentient” artificial intelligence The Washington Post article pointed out that “Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.” The Post article continued: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. That’s when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich. 
There were also plenty of humorous hot takes – even the New York Times’ Paul Krugman weighed in. Meanwhile, Emily Bender, professor of computational linguistics at the University of Washington, shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress towards artificial general intelligence (AGI). Is this peak AI hype? Now that the weekend news cycle has come to a close, some wonder whether discussing whether LaMDA should be treated as a Google employee means we have reached “peak AI hype.” However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019 and Brown professor Srinath Sridhar had the same musing in 2017. So, maybe not. Still, others pointed out that the entire “sentient AI” weekend debate was reminiscent of the “Eliza Effect,” or “the tendency to unconsciously assume computer behaviors are analogous to human behaviors” – named for the 1966 chatbot Eliza. Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term “Eliza Effect” in 1995, in which he said that while the “achievements of today’s artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat.” What the “sentient” AI debate means for the enterprise After a weekend filled with little but discussion around whether AI is sentient or not, one question is clear: What does this debate mean for enterprise technical decision-makers? Perhaps it is nothing but a distraction. A distraction from the very real and practical issues facing enterprises when it comes to AI. 
There is current and proposed AI legislation in the U.S., particularly around the use of artificial intelligence and machine learning in hiring and employment. A sweeping AI regulatory framework is being debated right now in the EU. “I think corporations are going to be woefully on their back feet reacting, because they just don’t get it – they have a false sense of security,” said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week. There are wide-ranging, serious issues with AI bias and ethics – just look at the AI trained on 4chan that was revealed last week, or the ongoing issues related to Clearview AI’s facial recognition technology. That’s not even getting into issues related to AI adoption, including infrastructure and data challenges. Should enterprises keep their eye on the issues that really matter in the real sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting.AI, had this to say: “There are a lot of serious questions in AI. But there is absolutely no reason whatever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not.” I think it’s time to put down my popcorn and get off Twitter. "
14,487
2,022
"LLMs have not learned our language — we’re trying to learn theirs | VentureBeat"
"https://venturebeat.com/ai/llms-have-not-learned-our-language-were-trying-to-learn-theirs%EF%BF%BC"
"LLMs have not learned our language — we’re trying to learn theirs Large language models (LLMs) are currently a red-hot area of research in the artificial intelligence (AI) community. Scientific progress in LLMs in the past couple of years has been nothing short of impressive, and at the same time, there is growing interest and momentum to create platforms and products powered by LLMs. However, in tandem with advances in the field, the shortcomings of large language models have also become evident. Many experts agree that no matter how large LLMs and their training datasets become, they will never be able to learn and understand our language as we do. Interestingly, these limits have given rise to a trend of research focused on studying the knowledge and behavior of LLMs. In other words, we are learning the language of LLMs and discovering ways to better communicate with them. 
What LLMs can’t learn LLMs are neural networks that have been trained on hundreds of gigabytes of text gathered from the web. During training, the network is fed with text excerpts that have been partially masked. The neural network tries to guess the missing parts and compares its predictions with the actual text. By doing this repeatedly and gradually adjusting its parameters, the neural network creates a mathematical model of how words appear next to each other and in sequences. After being trained, the LLM can receive a prompt and predict the words that come after it. The larger the neural network, the more learning capacity the LLM has. The larger the dataset (given that it contains well-curated and high-quality text), the greater chance that the model will be exposed to different word sequences and the more accurate it becomes in generating text. However, human language is about much more than just text. In fact, language is a compressed way to transmit information from one brain to another. Our conversations often omit shared knowledge, such as visual and audible information, physical experience of the world, past conversations, our understanding of the behavior of people and objects, social constructs and norms, and much more. As Yann LeCun, VP and chief AI scientist at Meta and award-winning deep learning pioneer, and Jacob Browning, a post-doctoral associate in the NYU Computer Science Department, wrote in a recent article, “A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” The two scientists note, however, that LLMs “will undoubtedly seem to approximate [human intelligence] if we stick to the surface. 
And, in many cases, the surface is enough.” I'll be the first one to point out the limitations of LLMs, but I agree that they do much more than merely storing the training data and regurgitating it with a bit of interpolation. https://t.co/QVH4ye1WWC The key is to understand how close this approximation is to reality, and how to make sure LLMs are responding in the way we expect them to. Here are some directions of research that are shaping this corner of the widening LLM landscape. Teaching LLMs to express uncertainty In most cases, humans know the limits of their knowledge (even if they don’t directly admit it). They can express uncertainty and doubt and let their interlocutors know how confident they are in the knowledge they are passing. On the other hand, LLMs always have a ready answer for any prompt, even if their output doesn’t make sense. Neural networks usually provide numerical values that represent the probability that a certain prediction is correct. But for language models, these probability scores do not represent the LLM’s confidence in the reliability of its response to a prompt. A recent paper by researchers at OpenAI and the University of Oxford shows how this shortcoming can be remedied by teaching LLMs “to express their uncertainty in words.” They show that LLMs can be fine-tuned to express epistemic uncertainty using natural language, which they describe as “verbalized probability.” This is an important direction of development, especially in applications where users want to turn LLM output into actions. The researchers suggest that expressing uncertainty can make language models honest. “If an honest model has a misinformed or malign internal state, then it could communicate this state to humans who can act accordingly,” they write. Discovering emergent abilities of LLMs Scale has been an important factor in the success of language models. 
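The “verbalized probability” idea from the uncertainty section above can be sketched as a prompt template plus a parser that maps a stated confidence phrase back to a number. This is a minimal sketch under stated assumptions: the `CONFIDENCE_WORDS` table, the template format and the function names are illustrative inventions, not the paper’s actual fine-tuning setup.

```python
# Hypothetical mapping from verbalized confidence phrases to numbers.
# A real system would calibrate these against the model's measured accuracy.
CONFIDENCE_WORDS = {
    "almost certain": 0.95,
    "likely": 0.75,
    "about even": 0.5,
    "unlikely": 0.25,
}

def build_prompt(question: str) -> str:
    """Ask the model to answer AND state its confidence in words."""
    return (
        f"Q: {question}\n"
        "A: <answer>. Confidence: <one of: almost certain, likely, about even, unlikely>"
    )

def parse_confidence(model_output: str) -> float:
    """Extract a numeric confidence from a verbalized answer; default to 0.5."""
    parts = model_output.lower().rsplit("confidence:", 1)
    if len(parts) == 2:
        phrase = parts[1].strip().rstrip(".")
        return CONFIDENCE_WORDS.get(phrase, 0.5)
    return 0.5

print(parse_confidence("A: 1907. Confidence: likely"))  # 0.75
```

The point of the design is that uncertainty travels in the text itself, so any caller can act on it without access to the model’s internal token probabilities.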
As models become larger, not only does their performance improve on existing tasks, but they acquire the capacity to learn and perform new tasks. In a new paper, researchers at Google, Stanford University, DeepMind, and the University of North Carolina at Chapel Hill have explored the “emergent abilities” of LLMs, which they define as abilities that “are not present in smaller models but are present in larger models.” Emergence is characterized by the model manifesting random performance on a certain task until it reaches a certain scale threshold, after which its performance suddenly jumps and continues to improve as the model becomes larger. The paper covers emergent abilities in several popular LLM families, including GPT-3, LaMDA, Gopher, and PaLM. The study of emergent abilities is important because it provides insights into the limits of language models at different scales. It can also help find ways to improve the capabilities of the smaller and less costly models. Exploring the limits of LLMs in reasoning Given the ability of LLMs to generate articles, write software code, and hold conversations about sentience and life, it is easy to think that they can reason and plan things like humans. But a study by researchers at Arizona State University, Tempe, shows that LLMs do not acquire the knowledge and functions underlying tasks that require methodical thinking and planning, even when they perform well on benchmarks designed for logical, ethical and common-sense reasoning. The study shows that what looks like planning and reasoning in LLMs is, in reality, pattern recognition abilities gained from continued exposure to the same sequence of events and decisions. This is akin to how humans acquire some skills (such as driving), where they first require careful thinking and coordination of actions and decisions but gradually become able to perform them without active thinking. 
The researchers have established a new benchmark that tests reasoning abilities on tasks that stretch across long sequences and can’t be cheated through pattern-recognition tricks. The goal of the benchmark is to establish the current baseline and open new windows for developing planning and reasoning capabilities for current AI systems. Guiding LLMs with better prompts As the limits of LLMs become known, researchers find ways to either extend or circumvent them. In this regard, an interesting area of research is “prompt engineering,” a series of tricks that can improve the performance of language models on specific tasks. Prompt engineering guides LLMs by including solved examples or other cues in prompts. One such technique is “chain-of-thought prompting” (CoT), which helps the model solve logical problems by providing a prompt that includes a solved example with intermediary reasoning steps. CoT prompting not only improves LLMs’ abilities to solve reasoning tasks, but it also gets them to output the steps they undergo to solve each problem. This helps researchers gain insights into LLMs’ reasoning process (or semblance of reasoning). A more recent technique that builds on the success of CoT is “zero-shot chain-of-thought prompting,” which uses special trigger phrases such as “Let’s think step by step” to invoke reasoning in LLMs. The advantage of zero-shot CoT is that it does not require the user to craft a special prompt for each task, and although it is simple, it still works well enough in many cases. These and similar works of research show that we still have a lot to learn about LLMs, and there might be more to be discovered about the language models that have captured our fascination in the past few years. 
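The two prompting styles described above can be sketched as plain string templates sent to any LLM API. This is a minimal sketch: the function names are illustrative, and the worked example is the well-known ball-counting problem from the chain-of-thought literature.

```python
def cot_prompt(question: str) -> str:
    """Few-shot chain-of-thought: include one solved example with its
    intermediary reasoning steps before the new question."""
    example = (
        "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. How many now?\n"
        "A: He starts with 5. 2 cans of 3 is 6. 5 + 6 = 11. The answer is 11.\n"
    )
    return example + f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot chain-of-thought: no solved example, just the trigger phrase
    appended to the answer slot."""
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot_prompt("If I have 3 apples and eat 1, how many are left?"))
```

The contrast is visible in the templates themselves: the few-shot variant must be hand-crafted per task family, while the zero-shot variant reuses one trigger phrase everywhere, which is exactly the trade-off the article describes.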
"
14,488
2,023
"NTT’s CFO says energy consumption and pricing are top challenges for enterprise generative AI | VentureBeat"
"https://venturebeat.com/ai/ntts-cfo-says-energy-consumption-and-pricing-are-top-challenges-for-enterprise-generative-ai"
"NTT’s CFO says energy consumption and pricing are top challenges for enterprise generative AI International telecommunications giant and global systems integrator NTT expects that energy consumption of generative AI models and value-based pricing will be among the most significant challenges facing enterprises as they move toward broader adoption and integration of the burgeoning technology. Those were the major takeaways from a panel discussion at the VentureBeat Transform 2023 conference on Wednesday featuring Vab Goel, Founding Partner, NTTVC and Board Member, NTT DATA, and Takashi Hiroi, Chief Financial Officer, Senior EVP, & Board Member at NTT Corporation. Hiroi predicted that new guidelines and resilient global systems will help mitigate some of the harms foreseen by experts today. 
“This approach is not different from building conventional system integration systems,” said Hiroi. AI for IT support With revenue exceeding $100 billion, a significant chunk of that in the system integration market, and 100,000 enterprise customers, NTT is no stranger to implementing AI in its various verticals. NTT Group now operates globally, has sales teams based in more than 70 countries, and counts more than 300,000 employees. The company provides a diverse range of technology solutions and services including digital transformation, IT services, consulting, cloud, mobile and data centers. NTT Group has continuously evolved from its telecommunications roots to become an innovative player in emerging technologies like 5G, AI, IoT, digital transformation, cloud services, and quantum cryptography. Through NTTVC, based in Silicon Valley, it invests globally in early-stage technology startups. Already, NTT Group leverages AI technologies including large language models (LLMs) for support and services, the AI-based translation service COTOHA , and marketing data services with behavior forecasting. Edge-based computing the way forward NTT is focused on research to solve the growing energy needs of edge-based computing to power local AI systems. “AI needs huge energy consumption and the data processing is as close to the user as possible,” said Hiroi. “So we started to provide an edge computing service as soon as possible.” He added that power from renewable sources will be part of the way forward. Additionally, the economic challenges of pricing AI services correctly were top of mind for the CFO. 
Acknowledging the complexity of the matter, Hiroi said he didn’t have the precise answer to the question but expected that further understanding of the value AI brings to an enterprise will determine pricing. “It is important as a CFO, you think about not only about technology but also about pricing,” Goel said in the discussion. “Usually, in today’s world, everything gets commoditized, it’s very competitive. Value-based pricing does not work for most of the companies.” Similar problems have been solved before Hiroi was less concerned with the perceived threat of AI to employment in general. He noted that the trend toward automation isn’t new. “If you look back at the history of industrial development, in the 1960s automated systems were introduced in countries and has [since] been increasing,” said Hiroi. “Yes, we can manage that shift.” Likewise, when asked about the ethical concerns that come with the use of AI, Hiroi pointed to the video streaming platform YouTube, where there have been ongoing issues but where people nonetheless manage to enjoy the service today. Hiroi noted that while it is still early, NTT is developing guidelines to protect the use of confidential information in AI systems. “AI is created by people,” said Hiroi. “The company providing the system integration services using AI has a responsibility to [deal with] ethical issues.” Big and small companies can find synergy Offering his own views, Goel suggested that AI startup companies should partner with larger providers and systems integrators, which can accelerate their go-to-market strategy. “My advice for large enterprises is to broaden the scope and look at some very early-stage companies,” said Goel. “Some companies are even at the ideation stage with just a PowerPoint deck to paint their vision.” NTT is partnering with Celona, a startup company, to offer private 5G. 
"
14,489
2,023
"NTT's vision for AI adoption -- and why collaboration is key | VentureBeat"
"https://venturebeat.com/ai/ntts-vision-for-ai-adoption-and-why-collaboration-is-key"
"NTT’s vision for AI adoption — and why collaboration is key NTT is a global technology and business solutions provider, with more than $110 billion in revenue, sales teams in 70 countries and more than 100,000 enterprise customers in every vertical. At VentureBeat’s Transform 2023, Vab Goel, founding partner, NTTVC and board member, NTT DATA, spoke with Takashi Hiroi, chief financial officer, senior EVP & board member at NTT Corporation, about how the company is approaching generative AI to support consumers and enterprise customers, the challenges of gen AI adoption, and the importance of partnership and collaboration to stay competitive. 
NTT is in the early stages of exploring the potential of generative AI, Hiroi said, but has been providing AI-powered marketing data services, developed COTOHA, an automated service for text translation and summarization, and developed a ChatGPT solution for a Spanish pharmaceutical company that helps medical personnel parse large medical documents quickly and accurately. Crucial considerations around gen AI adoption Generative AI is a shiny new technology, but it requires a customer-centric approach — Hiroi stressed the importance of helping customers identify and understand their problems or areas for improvement before integrating AI solutions. It’s an approach that aligns with NTT’s conventional systems integration practices. The cost of AI services is also crucial to consider before launching into a new AI project; Hiroi noted that establishing competitive pricing models for AI will remain complex. “Pricing of AI, that’s going to be very complicated,” he said. “I don’t right now have a clear vision to formulate the price of AI, but gradually the price of AI will be determined by how much value that AI brings in.” Goel pointed out that the high compute and energy requirements of AI deployment will be an important consideration, and every AI initiative will require a balance between that value and what a company will need to spend up front. It also means organizations need to address power consumption and explore renewable energy solutions — a new and urgent consideration for CTOs. “Twenty years ago, when [technology vendors] said, ‘We’re going to save you power,’ as an engineer I didn’t care if they were going to save me power,” Goel said. “The last few years, NTT financials were impacted by power pricing. 
Power consumption is an important KPI now.” Why collaboration is so important NTT’s collaborative approach with venture capital companies and early-stage startups is an essential part of its strategy, because it will be the key to success in the AI landscape. “For large corporations, global companies, it’s very easy to just partner with who we may perceive as the leaders today,” Goel said. “My advice is to broaden the scope and really look at some very early-stage companies. Partner with them early and shape their vision. Then you’ll have a competitive advantage.” It’s also about turning the cost of AI into a profit, he added. “Meeting a lot of startup companies and taking some risks is going to be critical,” he said. “It’s pretty clear that it’s going to be a partnership of large companies and small companies that will be the winning formula.” And while it’s relatively easy for a gen AI startup to raise funds amidst all the excitement, the last wave of companies in analytics and SaaS should remain an object lesson — they also raised huge amounts of money, but many are struggling right now, as the market cycle changes. “Go-to-market partnerships are critical,” he said. “Startup companies should find large partners who can introduce them to potential customers and build services. For example, OpenAI is partnering with Microsoft. NTT is very diverse and has a strong track record of working with very early-stage companies, taking risks, and going to market with startups.” "
14,490
2,023
"Wayfair embraces generative AI with caution, 'humans in the loop' | VentureBeat"
"https://venturebeat.com/ai/wayfair-cautiously-embraces-generative-ai-with-a-premium-on-humans-in-the-loop"
"Wayfair cautiously embraces generative AI, with ‘a premium on humans in the loop’ Wayfair's Wilko Schulz-Mahlendorf at VentureBeat Transform 2023 Online furniture retailer Wayfair is embracing the power of generative AI with a thoughtful, measured approach that involves a council of stakeholders and a core thesis to help accelerate business productivity. In a session at today’s VentureBeat Transform 2023, Wilko Schulz-Mahlendorf, head of pricing and marketing science at Wayfair, offered insights into how the company is using generative AI today and what its strategy is to integrate more AI in the future. A key tenet of the Wayfair approach to generative AI is to take a cautious and mindful strategic look at the technology before it is rolled out to production; it’s an approach that also makes sure to include humans. 
“At Wayfair, we have exacting standards on quality and we don’t want to put any content in front of our customers or suppliers that could involve hallucinations,” Schulz-Mahlendorf said. “We have really placed a premium on humans in the loop.” Giving Wayfair employees ‘superpowers’ with generative AI As part of Wayfair’s approach to generative AI, the company looked at applications where the technology could augment the workforce’s productivity — what Schulz-Mahlendorf referred to as giving employees “superpowers.” Schulz-Mahlendorf explained that Wayfair identified a few specific tasks where gen AI technology could help its inbound sales and service teams. Those tasks included text summarization, product recommendations, and suggestions to agents for best actions. Wayfair also needs, and writes, a lot of content for its site, a critical area where generative AI is helping. “We’re bounded by the amount of copywriters that we have in terms of how much copy we can actually get out there,” Schulz-Mahlendorf said. “We evaluated a number of different products, including some of the vendors that are here at the conference, to see if we could double or triple the efficiency of our human copywriters.” Generative AI copy generation is not intended to replace humans, but rather to help them be more productive. Schulz-Mahlendorf said that the goal is to generate a first draft that can then be polished and fine-tuned by humans to meet Wayfair’s exacting standards. “There’s a lot of blank space on a lot of websites that were previously left unfilled, where we can now create engaging content for our customers,” Schulz-Mahlendorf said. 
“It’s not about replacing content that already exists, it’s about putting content in places where we may not have had something before.” How Wayfair decides where to use generative AI Schulz-Mahlendorf said that Wayfair has assembled an internal generative AI council to help evaluate strategy for and potential uses of the technology. He explained that the council is a group of people pulled from different business units across the company. It helps to evaluate terms and conditions of individual technologies, deployment ideas and business value. The council also helps to determine whether Wayfair should buy or build gen AI technologies as the company develops its strategy. Schulz-Mahlendorf emphasized that to date, in terms of vendor technologies, there is no clear winner across the board for all of Wayfair’s use cases. “We’re taking a really pragmatic approach, which is: Let’s think about the cost of each of these licenses, let’s think about what each of these things can do best, and let’s work with vendors and partners that are willing to work with us to customize things down the road,” he said. "
14,491
2,023
"Generative AI is 'everything, everywhere, all at once' in the enterprise, says Mastercard data leader | VentureBeat"
"https://venturebeat.com/ai/generative-ai-is-everything-everywhere-all-at-once-in-the-enterprise-says-mastercard-cdo"
"Generative AI is ‘everything, everywhere, all at once’ in the enterprise, says Mastercard data leader What a difference a year makes: At the Women in AI breakfast that kicked off VentureBeat Transform 2023 in San Francisco this morning, Mastercard fellow of data & AI JoAnn Stonier said that right now generative AI in the enterprise is like the Oscar-winning movie — Everything, Everywhere, All at Once. That’s a big shift from last July, when the same Women in AI discussion at Transform focused on predictive AI, governance, minimizing bias and model creation. “It was very much a company and organization-by-organization sport,” said Stonier, who was also on the 2022 panel. But now, instead of looking at generative AI from a risk perspective, everyone began to have FOMO, she explained — fear of missing out and being left behind. 
Now, with generative AI a team sport, she said, “Every organization is trying to figure out what does it mean to them, the right approach for their enterprise.” Emily Roberts, senior vice president, consumer platforms at Capital One, and Xiaodi Zhang, vice president, seller experience at eBay, also participated in the Women in AI panel. The opportunity of generative AI is exciting, said Roberts, but she cautioned that for Capital One, nothing has changed yet. “The promise of what we could do is so exciting, and there’s so much opportunity, and we’ve been thinking about building continuous learning organizations, the structure in how you’re going to apply this to our thinking,” she said. “In the day-to-day, I run consumer platforms, so nothing has changed yet, but a lot of what we want to be thinking about as product leaders is what this can become over time.” Zhang said that eBay has been able to introduce generative AI into its listing flow for customers — something the company is testing and iterating on. “We’ve been pleasantly surprised by their [customers’] reaction,” she said, adding that eBay customers want efficiency but also appreciate having control over the tools. She suggested companies consider internal hackathons that leverage employee capabilities in generative AI. Stonier added that Mastercard has expanded its AI council to evaluate generative AI tools. “We’re seeing things cluster around knowledge management and customer service and chatbots, even advertising and media services — as well as refining interactive tools for our customers — but [we] are not ready to put out there.” The more important the outcome, she said, the more distance there should be between the input and validating the output. “We’re refining what we want to do, but we’re not there yet,” she said. 
"
14,492
2,023
"Slack’s vision for enterprise AI: Empower ‘everybody to automate’ | VentureBeat"
"https://venturebeat.com/ai/slacks-vision-for-enterprise-ai-empower-everybody-to-automate"
"Slack’s vision for enterprise AI: Empower ‘everybody to automate’ The messaging software company Slack sees massive potential in generative AI and large language models, allowing more automation to improve workplace productivity and efficiency, said Steve Wood, Slack’s SVP of product management, at the VentureBeat Transform 2023 conference on Tuesday. “For me, I think automation, integration and AI are going to have a profound impact on how we experience software going forward,” Wood said in his panel discussion with Brian Evergreen, founder and CEO of the Profitable Good Company, a leadership advisory firm. 
Launched as a startup in 2013 by Flickr co-founder Stewart Butterfield and acquired by Salesforce for nearly $30 billion in 2021, Slack has embraced automation technologies and leveraged large language models (LLMs) from OpenAI and Anthropic to more efficiently summarize busy channel activity in the instant messaging platform. It also generates new workflows from the contextual information found in a company’s online discussion. Reflecting on how automation first appeared in Slack, Wood said the platform needed to be rebuilt to “make it friendly for things like AI.” Automation for everyone Another shift highlighted by Wood was the move towards “flexible, modular building blocks” that would allow low- or no-code individuals to automate features of their favorite apps or potentially have developers put elements in place that could be better understood by AI in the future. “I think too many organizations are holding on to automation as a practitioner’s role. And I think we need to open it up and … [empower] everybody to build and automate things, and they may not get it perfectly right,” said Wood. He said that the level of institutional comfort will need to grow to enable a future where AI-enhanced automation tools are accessible to everyone in an organization. Wood said the integration of outside information held in LLMs with the unique data found in the conversations on individual companies’ Slack channels could be key to quickly unlocking bespoke business intelligence for the users of the collaboration tool. AI, underutilized? The ubiquitous corporate chat app released a report in May that outlined three trends that shape the modern workplace and drive employee productivity. 
The State of Work 2023 survey highlighted the underutilization of new technologies such as AI and automation, the transformation of office work and design in the era of hybrid work, and the direct influence of employee engagement and talent development on productivity. “Today there’s all these pervasive productivity gains through these tools and we just have to let them be discovered,” said Wood. The automation of repetitive tasks is an underused way to improve productivity among teams in the age of hybrid work. While the survey found that people viewed automation as useful, only a small portion of the surveyed companies ended up using the new tools to address these challenges. “It’s something like the average is around five hours being saved a week using AI technology in their work, which translates to a month a year. That’s a non-trivial boost,” said Wood. “So it’s time to rethink software and how we engage with it, for sure.” Wood further opined that the true value of generative AI for Slack and society was yet to be fully realized, and could not be accurately projected yet as it likely involved changing societal behaviors. He compared gen AI to the advent of Uber and ride-hailing apps, pointing out that the total addressable market (TAM) wasn’t just people who took taxis, but everyone who changed their behavior once they realized they could order rides easily from their phones. "
14,493
2,021
"CLIP: Connecting text and images"
"https://openai.com/research/clip"
"Illustration: Justin Jay Wang Research CLIP: Connecting text and images We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3. January 5, 2021 Although deep learning has revolutionized computer vision, current approaches have several major problems: typical vision datasets are labor intensive and costly to create while teaching only a narrow set of visual concepts; standard vision models are good at one task and one task only, and require significant effort to adapt to a new task; and models that perform well on benchmarks have disappointingly poor performance on stress tests, [^reference-1] [^reference-2] [^reference-3] [^reference-4] casting doubt on the entire deep learning approach to computer vision. We present a neural network that aims to address these problems: it is trained on a wide variety of images with a wide variety of natural language supervision that’s abundantly available on the internet. 
By design, the network can be instructed in natural language to perform a great variety of classification benchmarks, without directly optimizing for the benchmark’s performance, similar to the “zero-shot” capabilities of GPT-2 [^reference-5] and GPT-3. [^reference-6] This is a key change: by not directly optimizing for the benchmark, we show that it becomes much more representative: our system closes this “robustness gap” by up to 75% while matching the performance of the original ResNet-50 [^reference-7] on ImageNet zero-shot without using any of the original 1.28M labeled examples. Background and related work CLIP (Contrastive Language–Image Pre-training) builds on a large body of work on zero-shot transfer, natural language supervision, and multimodal learning. The idea of zero-data learning dates back over a decade [^reference-8] but until recently was mostly studied in computer vision as a way of generalizing to unseen object categories. [^reference-9] [^reference-10] A critical insight was to leverage natural language as a flexible prediction space to enable generalization and transfer. In 2013, Richard Socher and co-authors at Stanford [^reference-11] developed a proof of concept by training a model on CIFAR-10 to make predictions in a word vector embedding space and showed this model could predict two unseen classes. The same year DeVISE [^reference-12] scaled this approach and demonstrated that it was possible to fine-tune an ImageNet model so that it could generalize to correctly predicting objects outside the original set of 1,000 training classes. Most inspirational for CLIP is the work of Ang Li and his co-authors at FAIR [^reference-13] who in 2016 demonstrated using natural language supervision to enable zero-shot transfer to several existing computer vision classification datasets, such as the canonical ImageNet dataset. 
They achieved this by fine-tuning an ImageNet CNN to predict a much wider set of visual concepts (visual n-grams) from the text of titles, descriptions, and tags of 30 million Flickr photos and were able to reach 11.5% accuracy on ImageNet zero-shot. Finally, CLIP is part of a group of papers revisiting learning visual representations from natural language supervision in the past year. This line of work uses more modern architectures like the Transformer [^reference-32] and includes VirTex, [^reference-33] which explored autoregressive language modeling, ICMLM, [^reference-34] which investigated masked language modeling, and ConVIRT, [^reference-35] which studied the same contrastive objective we use for CLIP but in the field of medical imaging. Approach We show that scaling a simple pre-training task is sufficient to achieve competitive zero-shot performance on a great variety of image classification datasets. Our method uses an abundantly available source of supervision: the text paired with images found across the internet. This data is used to create the following proxy training task for CLIP: given an image, predict which one of a set of 32,768 randomly sampled text snippets was actually paired with it in our dataset. In order to solve this task, our intuition is that CLIP models will need to learn to recognize a wide variety of visual concepts in images and associate them with their names. As a result, CLIP models can then be applied to nearly arbitrary visual classification tasks. For instance, if the task of a dataset is classifying photos of dogs vs. cats, we check for each image whether a CLIP model predicts the text description “a photo of a dog” or “a photo of a cat” is more likely to be paired with it. 
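The proxy task above — pick the matching text for each image out of a batch of candidates — is a contrastive objective. Here is a minimal numpy sketch of the batch-level loss; the encoders, the 32,768-snippet batch, and CLIP's learned temperature are all simplified away, and the function name is my own:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: (n, d) arrays where row i of each is a matched
    image/text pair (stand-ins for CLIP's encoder outputs).
    """
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (n, n) similarity matrix; entry [i, j] scores image i against text j
    logits = image_emb @ text_emb.T / temperature

    # The correct pairing is the diagonal: image i goes with text i
    n = logits.shape[0]
    labels = np.arange(n)

    def cross_entropy(lg, lb):
        shifted = lg - lg.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Symmetric: classify the right text given an image, and vice versa
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
```

Perfectly aligned embeddings drive the loss toward zero, while mismatched pairs are penalized, which is what pushes the encoders to associate visual concepts with their names.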
CLIP was designed to mitigate a number of major problems in the standard deep learning approach to computer vision: Costly datasets : Deep learning needs a lot of data, and vision models have traditionally been trained on manually labeled datasets that are expensive to construct and only provide supervision for a limited number of predetermined visual concepts. The ImageNet dataset, one of the largest efforts in this space, required over 25,000 workers to annotate 14 million images for 22,000 object categories. In contrast, CLIP learns from text–image pairs that are already publicly available on the internet. Reducing the need for expensive large labeled datasets has been extensively studied by prior work, notably self-supervised learning, [^reference-14] [^reference-15] [^reference-16] contrastive methods, [^reference-17] [^reference-18] [^reference-19] [^reference-20] [^reference-21] self-training approaches, [^reference-22] [^reference-23] and generative modeling. [^reference-24] [^reference-25] [^reference-26] [^reference-27] Narrow : An ImageNet model is good at predicting the 1000 ImageNet categories, but that’s all it can do “out of the box.” If we wish to perform any other task, an ML practitioner needs to build a new dataset, add an output head, and fine-tune the model. In contrast, CLIP can be adapted to perform a wide variety of visual classification tasks without needing additional training examples. To apply CLIP to a new task, all we need to do is “tell” CLIP’s text-encoder the names of the task’s visual concepts, and it will output a linear classifier of CLIP’s visual representations. The accuracy of this classifier is often competitive with fully supervised models. We show random, non-cherry picked, predictions of zero-shot CLIP classifiers on examples from various datasets below. 
Poor real-world performance : Deep learning systems are often reported to achieve human or even superhuman performance [^reference-28] [^footnote-1] on vision benchmarks, yet when deployed in the wild, their performance can be far below the expectation set by the benchmark. In other words, there is a gap between “benchmark performance” and “real performance.” We conjecture that this gap occurs because the models “cheat” by only optimizing for performance on the benchmark, much like a student who passed an exam by studying only the questions on past years’ exams. In contrast, the CLIP model can be evaluated on benchmarks without having to train on their data, so it can’t “cheat” in this manner. This results in its benchmark performance being much more representative of its performance in the wild. To verify the “cheating hypothesis”, we also measure how CLIP’s performance changes when it is able to “study” for ImageNet. When a linear classifier is fitted on top of CLIP’s features, it improves CLIP’s accuracy on the ImageNet test set by almost 10%. However, this classifier does no better on average across an evaluation suite of 7 other datasets measuring “robust” performance. [^reference-30] Key takeaways 1. CLIP is highly efficient CLIP learns from unfiltered, highly varied, and highly noisy data, and is intended to be used in a zero-shot manner. We know from GPT-2 and 3 that models trained on such data can achieve compelling zero shot performance; however, such models require significant training compute. To reduce the needed compute, we focused on algorithmic ways to improve the training efficiency of our approach. We report two algorithmic choices that led to significant compute savings. The first choice is the adoption of a contrastive objective for connecting text with images. 
[^reference-31] [^reference-17] [^reference-35] We originally explored an image-to-text approach, similar to VirTex, [^reference-33] but encountered difficulties scaling this to achieve state-of-the-art performance. In small to medium scale experiments, we found that the contrastive objective used by CLIP is 4x to 10x more efficient at zero-shot ImageNet classification. The second choice was the adoption of the Vision Transformer, [^reference-36] which gave us a further 3x gain in compute efficiency over a standard ResNet. In the end, our best performing CLIP model trains on 256 GPUs for 2 weeks which is similar to existing large scale image models. [^reference-37] [^reference-23] [^reference-38] [^reference-36] 2. CLIP is flexible and general Because they learn a wide range of visual concepts directly from natural language, CLIP models are significantly more flexible and general than existing ImageNet models. We find they are able to zero-shot perform many different tasks. To validate this we have measured CLIP’s zero-shot performance on over 30 different datasets including tasks such as fine-grained object classification, geo-localization, action recognition in videos, and OCR. [^footnote-2] In particular, learning OCR is an example of an exciting behavior that does not occur in standard ImageNet models. Above, we visualize a random non-cherry picked prediction from each zero-shot classifier. This finding is also reflected on a standard representation learning evaluation using linear probes. The best CLIP model outperforms the best publicly available ImageNet model, the Noisy Student EfficientNet-L2, [^reference-23] on 20 out of 26 different transfer datasets we tested. 
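The linear-probe evaluation mentioned above fits a linear classifier on frozen features. The paper uses logistic regression; the following self-contained sketch substitutes closed-form ridge regression onto one-hot labels to show the idea without extra dependencies (function names are my own):

```python
import numpy as np

def fit_linear_probe(features, labels, num_classes, l2=1e-3):
    """Fit a linear classifier on frozen features (a 'linear probe').

    A simplification: ridge regression onto one-hot targets rather than
    the logistic regression used in the actual evaluation.
    """
    n, d = features.shape
    one_hot = np.eye(num_classes)[labels]                    # (n, k)
    # Closed-form ridge solution: W = (X^T X + l2*I)^-1 X^T Y
    gram = features.T @ features + l2 * np.eye(d)
    weights = np.linalg.solve(gram, features.T @ one_hot)    # (d, k)
    return weights

def probe_predict(features, weights):
    """Classify by taking the highest-scoring class per row."""
    return np.argmax(features @ weights, axis=1)
```

The point of the protocol is that the backbone never changes: if the frozen features already separate the classes, even this simple linear map on top recovers them.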
[Figure: linear-probe transfer performance of CLIP-ViT and CLIP-ResNet compared with prior models — Instagram-pretrained networks, ViT (ImageNet-21k), SimCLRv2, BiT-M, BiT-S, EfficientNet-NoisyStudent, EfficientNet, BYOL, MoCo, and ResNet] Across a suite of 27 datasets measuring tasks such as fine-grained object classification, OCR, activity recognition in videos, and geo-localization, we find that CLIP models learn more widely useful image representations. CLIP models are also more compute efficient than the models from 10 prior approaches that we compare with. Limitations While CLIP usually performs well on recognizing common objects, it struggles on more abstract or systematic tasks such as counting the number of objects in an image and on more complex tasks such as predicting how close the nearest car is in a photo. On these two datasets, zero-shot CLIP is only slightly better than random guessing. Zero-shot CLIP also struggles compared to task-specific models on very fine-grained classification, such as telling the difference between car models, variants of aircraft, or flower species. CLIP also still has poor generalization to images not covered in its pre-training dataset. For instance, although CLIP learns a capable OCR system, when evaluated on handwritten digits from the MNIST dataset, zero-shot CLIP only achieves 88% accuracy, well below the 99.75% of humans on the dataset. Finally, we’ve observed that CLIP’s zero-shot classifiers can be sensitive to wording or phrasing and sometimes require trial and error “prompt engineering” to perform well. Broader impacts CLIP allows people to design their own classifiers and removes the need for task-specific training data. The manner in which these classes are designed can heavily influence both model performance and model biases. For example, we find that when given a set of labels including Fairface [^reference-39] race labels [^footnote-3] and a handful of egregious terms such as “criminal”, “animal,” etc., the model tends to classify images of people aged 0–20 in the egregious category at a rate of ~32.3%. 
However, when we add the class “child” to the list of possible classes, this behaviour drops to ~8.7%. Additionally, given that CLIP does not need task-specific training data it can unlock certain niche tasks with greater ease. Some of these tasks may raise privacy or surveillance related risks and we explore this concern by studying the performance of CLIP on celebrity identification. CLIP has a top-1 accuracy of 59.2% for “in the wild” celebrity image classification when choosing from 100 candidates and a top-1 accuracy of 43.3% when choosing from 1000 possible choices. Although it’s noteworthy to achieve these results with task agnostic pre-training, this performance is not competitive when compared to widely available production level models. We further explore challenges that CLIP poses in our paper and we hope that this work motivates future research on the characterization of the capabilities, shortcomings, and biases of such models. We are excited to engage with the research community on such questions. Conclusion With CLIP, we’ve tested whether task agnostic pre-training on internet scale natural language, which has powered a recent breakthrough in NLP, can also be leveraged to improve the performance of deep learning for other fields. We are excited by the results we’ve seen so far applying this approach to computer vision. Like the GPT family, CLIP learns a wide variety of tasks during pre-training which we demonstrate via zero-shot transfer. We are also encouraged by our findings on ImageNet that suggest zero-shot evaluation is a more representative measure of a model’s capability. Authors Alec Radford Ilya Sutskever Jong Wook Kim Gretchen Krueger Sandhini Agarwal Acknowledgments We’d like to thank the millions of people involved in creating the data CLIP is trained on. We also are grateful to all our co-authors for their contributions to the project. 
Finally, we’d like to thank Jeff Clune, Miles Brundage, Ryan Lowe, Jakub Pachocki, and Vedant Misra for feedback on drafts of this blog and Matthew Knight for reviewing the code release. Design & Cover Artwork Justin Jay Wang "
14,494
2,022
"Neuro-symbolic AI could provide machines with common sense | VentureBeat"
"https://venturebeat.com/ai/neuro-symbolic-ai-could-provide-machines-with-common-sense"
"Neuro-symbolic AI could provide machines with common sense Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades. Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems. 
Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and approximating functions and more toward the idea of all the things the human mind does when you’re modeling the world, explaining and understanding the things you’re seeing, imagining things that you can’t see but could happen, and making them into goals that you can achieve by planning actions and solving problems?” Admittedly, that is a big gap, but bridging it starts with exploring one of the fundamental aspects of intelligence that humans and many animals share: intuitive physics and psychology. Intuitive physics and psychology Our minds are built not just to see patterns in pixels and soundwaves but to understand the world through models. As humans, we start developing these models as early as three months of age, by observing and acting in the world. We break down the world into objects and agents, and interactions between these objects and agents. Agents have their own goals and their own models of the world (which might be different from ours). For example, multiple studies by researchers Felix Warneken and Michael Tomasello show that children develop abstract ideas about the physical world and other people and apply them in novel situations. For example, in the following video, through observation alone, the child realizes that the person holding the objects has a goal in mind and needs help with opening the door to the closet. These capabilities are often referred to as “intuitive physics” and “intuitive psychology” or “theory of mind,” and they are at the heart of common sense. “These systems develop quite early in the brain architecture that is to some extent shared with other species,” Tenenbaum says. 
These cognitive systems are the bridge between all the other parts of intelligence such as the targets of perception, the substrate of action-planning, reasoning, and even language. AI agents should be able to reason and plan their actions based on mental representations they develop of the world and other agents through intuitive physics and theory of mind. Neuro-symbolic architecture Tenenbaum lists three components required to create the core for intuitive physics and psychology in AI. “We emphasize a three-way interaction between neural, symbolic, and probabilistic modeling and inference,” Tenenbaum says. “We think that it’s that three-way combination that is needed to capture human-like intelligence and core common sense.” The symbolic component is used to represent and reason with abstract knowledge. The probabilistic inference model helps establish causal relations between different entities, reason about counterfactuals and unseen scenarios, and deal with uncertainty. And the neural component uses pattern recognition to map real-world sensory data to knowledge and to help navigate search spaces. “We’re trying to bring together the power of symbolic languages for knowledge representation and reasoning as well as neural networks and the things that they’re good at, but also with the idea of probabilistic inference, especially Bayesian inference or inverse inference in a causal model for reasoning backwards from the things we can observe to the things we want to infer, like the underlying physics of the world, or the mental states of agents,” Tenenbaum says. The game engine in the head One of the key components in Tenenbaum’s neuro-symbolic AI concept is a physics simulator that helps predict the outcome of actions. Physics simulators are quite common in game engines and different branches of reinforcement learning and robotics. 
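The "reasoning backwards from the things we can observe to the things we want to infer" component above is ordinary Bayesian inversion. A toy, fully invented example (a block is observed to have fallen; infer whether an agent pushed it — all numbers are made up for illustration):

```python
def posterior(prior, likelihood, observation):
    """Bayes' rule over a discrete set of hidden causes.

    prior: {cause: P(cause)}
    likelihood: {cause: {observation: P(observation | cause)}}
    """
    unnorm = {c: prior[c] * likelihood[c][observation] for c in prior}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# Invented scenario: was the fallen block pushed?
prior = {"pushed": 0.1, "not_pushed": 0.9}
likelihood = {
    "pushed": {"fell": 0.8, "stood": 0.2},
    "not_pushed": {"fell": 0.05, "stood": 0.95},
}
# posterior(prior, likelihood, "fell")["pushed"] -> 0.64
```

Observing "fell" raises the probability of "pushed" from 10% to 64%: a rare cause becomes plausible once the evidence strongly favors it, which is the kind of backward inference a causal world model supports.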
But unlike other branches of AI that use simulators to train agents and transfer their learnings to the real world, Tenenbaum’s idea is to integrate the simulator into the agent’s inference and reasoning process. “That’s why we call it the game engine in the head,” he says. The physics engine will help the AI simulate the world in real-time and predict what will happen in the future. The simulation just needs to be reasonably accurate and help the agent choose a promising course of action. This is similar to how the human mind works. When we look at an image, such as a stack of blocks, we will have a rough idea of whether it will resist gravity or topple. Or if we see a set of blocks on a table and are asked what will happen if we give the table a sudden bump, we can roughly predict which blocks will fall. We might not be able to predict the exact trajectory of each object, but we develop a high-level idea of the outcome. When combined with a symbolic inference system, the simulator can be configured to test various possible simulations at a very fast rate. Approximating 3D scenes While simulators are a great tool, one of their big challenges is that we don’t perceive the world in terms of three-dimensional objects. The neuro-symbolic system must detect the position and orientation of the objects in the scene to create an approximate 3D representation of the world. There are several attempts to use pure deep learning for object position and pose detection, but their accuracy is low. In a joint project, MIT and IBM created “ 3D Scene Perception via Probabilistic Programming ” (3DP3), a system that resolves many of the errors that pure deep learning systems fall into. 3DP3 takes an image and tries to explain it through 3D volumes that capture each object. It feeds the objects into a symbolic scene graph that specifies the contact and support relations between them. 
And then it tries to reconstruct the original image and depth map to compare against the ground truth. Thinking about solutions Once the neuro-symbolic agent has a physics engine to model the world, it should be able to develop concepts that enable it to act in novel ways. For example, people (and sometimes animals) can learn to use a new tool to solve a problem or figure out how to repurpose a known object for a new goal (e.g., use a rock instead of a hammer to drive in a nail). For this, Tenenbaum and his colleagues developed a physics simulator in which people would have to use objects to solve problems in novel ways. The same engine was used to train AI models to develop abstract concepts about using objects. “What’s important is to develop higher-level strategies that might transfer in new situations. This is where the symbolic approach becomes key,” Tenenbaum says. For example, people can use abstract concepts such as “hammer” and “catapult” and use them to solve different problems. “People can form these abstract concepts and transfer them to near and far situations. We can model this through a program that can describe these concepts symbolically,” Tenenbaum says. In one of their projects, Tenenbaum and his colleagues built an AI system that was able to parse a scene and use a probabilistic model to produce a step-by-step set of symbolic instructions for solving physics problems. For example, to throw an object placed on a board, the system was able to figure out that it had to find a large object, place it high above the opposite end of the board, and drop it to create a catapult effect. Physically grounded language Until now, while we talked a lot about symbols and concepts, there was no mention of language. Tenenbaum explained in his talk that language is deeply grounded in the unspoken common-sense knowledge that we acquire before we learn to speak. Intuitive physics and theory of mind are missing from current natural language processing systems. 
Large language models, the currently popular approach to natural language processing and understanding, try to capture relevant patterns between sequences of words by examining very large corpora of text. While this method has produced impressive results, it also has limits when it comes to dealing with things that are not represented in the statistical regularities of words and sentences. “There have been tremendous advances in large language models, but because they don’t have a grounding in physics and theory of mind, in some ways they are quite limited,” Tenenbaum says. “And you can see this in their limits in understanding symbolic scenes. They also don’t have a sense of physics. Verbs often refer to causal structures. You have to be able to capture counterfactuals and they have to be probabilistic if you want to make judgments.” The building blocks of common sense So far, many of the successful approaches in neuro-symbolic AI provide the models with prior knowledge of intuitive physics such as dimensional consistency and translation invariance. One of the main challenges that remain is how to design AI systems that learn these intuitive physics concepts as children do. The learning space of physics engines is much more complicated than the weight space of traditional neural networks, which means that we still need to find new techniques for learning. Tenenbaum also discusses the way humans develop building blocks of knowledge in a paper titled “ The Child as a Hacker. ” In the paper, Tenenbaum and his co-authors use programming as an example of how humans explore solutions across different dimensions such as accuracy, efficiency, usefulness, modularity, etc. They also discuss how humans gather bits of information, develop them into new symbols and concepts and then learn to combine them together to form new concepts. These directions of research might help crack the code of common sense in neuro-symbolic AI. 
“We want to provide a roadmap of how to achieve the vision of thinking about what is it that makes human common sense distinctive and powerful from the very beginning,” Tenenbaum says. “In a sense, it is one of AI’s oldest dreams, going back to Alan Turing’s original proposal for intelligence as computation and the idea that we might build a machine that achieves human-level intelligence by starting like a baby and teaching it like a child. This has been inspirational for a number of us and what we’re trying to do is come up with the building blocks for that.” Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. "
14,495
2,023
"Anthropic expands Claude AI availability, but still no Canada | VentureBeat"
"https://venturebeat.com/ai/anthropic-brings-claude-ai-to-more-countries-but-still-no-canada-for-now"
"Anthropic brings Claude AI to more countries, but still no Canada (for now) 
Anthropic, the “ Constitutional AI ” foundation model startup from San Francisco that is perhaps the foremost rival to OpenAI , made a big move this week, bringing its Claude 2 large language model (LLM) chatbot to 95 countries total , including: Albania Algeria Antigua and Barbuda Argentina Australia Bahamas Bangladesh Barbados Belize Benin Bhutan Bolivia Botswana Cape Verde Chile Colombia Congo Costa Rica Dominica Dominican Republic East Timor Ecuador El Salvador Fiji Gambia Georgia Ghana Guatemala Guinea-Bissau Guyana Honduras India Indonesia Israel Ivory Coast Jamaica Japan Kenya Kiribati Kuwait Lebanon Lesotho Liberia Madagascar Malawi Malaysia Maldives Marshall Islands Mauritius Mexico Micronesia Mongolia Mozambique Namibia Nauru Nepal New Zealand Niger Nigeria Oman Palau Palestine Panama Papua New Guinea Paraguay Peru Philippines Qatar Rwanda Saint Kitts and Nevis Saint Lucia Saint Vincent and the Grenadines Samoa São Tomé and Príncipe Senegal Seychelles Sierra Leone Singapore Solomon Islands South Africa South Korea Sri Lanka Suriname Taiwan Thailand Tonga Trinidad and Tobago Tuvalu Ukraine United Arab Emirates United Kingdom United States Uruguay Vanuatu Zambia “We’re rolling out access to Claude.ai to more people around the world,” the company posted on X (formerly Twitter ). “Starting today, users in 95 countries can talk to Claude and get help with their professional or day-to-day tasks…Since launching in July, millions of users have leveraged Claude’s expansive memory, 100K token context window and file upload feature. Claude has helped them analyze data, improve their writing and even talk to books and research papers.” However, among those nations conspicuously left off the list was the home of some of VentureBeat’s own contributors: Canada. Canada remains elusive for some AI applications Interestingly, Google’s Bard AI chatbot is also not yet available in the hockey-loving, beaver-ridden country, though OpenAI’s ChatGPT is. 
And of course, Toronto AI startup Cohere is also based in the country. Asked by VentureBeat about why Anthropic had not yet brought Claude 2 to Canada, and if it was in the works, a spokesperson responded via email to say: “I can share that the team is diligently working to make Claude available in Canada as soon as possible.” That’s in line with the company’s message on X as well, where it posted: “We’re working hard to responsibly expand availability over the coming months—and will have more to share soon.” The lack of a definitive reason and timeline may be cold comfort to our Canadian contributors, but at least there is something to look forward to. Some contributors reported that Poe, the AI model aggregator subscription service from Quora, did allow them to access Claude 2 from Canada. Canadian politicians have taken a hard line towards AI regulations, so their tough talk may be ensuring more due diligence from U.S. AI companies looking to expand there. Anthropic recently secured an up to $4 billion commitment from Amazon and another $100 million from South Korea Telecom (SKT), so it certainly has no shortage of cash to help it along its quest. The company has also earned praise from users for Claude 2’s ability to parse PDFs. However, as of now, it lacks some of the image generation, multimodal audio/video, and web browsing features of ChatGPT. 
"
14,496
2,022
"Yann LeCun’s vision for creating autonomous machines | VentureBeat"
"https://venturebeat.com/ai/yann-lecuns-vision-for-creating-autonomous-machines"
"Yann LeCun’s vision for creating autonomous machines Editor’s note: This story has been updated to reflect that Yann LeCun’s work touches on ML research previously conducted by German computer scientist Jürgen Schmidhuber. In the midst of the heated debate about AI sentience, conscious machines and artificial general intelligence, Yann LeCun, chief AI scientist at Meta, published a blueprint for creating “autonomous machine intelligence.” LeCun has compiled his ideas in a paper that draws inspiration from progress in machine learning, robotics, neuroscience and cognitive science. It examines some ML work by German computer scientist and AI professor Jürgen Schmidhuber between 1990 and 2015. LeCun lays out a roadmap for creating AI that can model and understand the world, reason and plan to do tasks on different timescales. 
While the paper is not a scholarly document, as pointed out by several others in the field, it does provide an interesting framework for thinking about the different pieces needed to replicate animal and human intelligence. It also shows how the mindset of LeCun, an award-winning pioneer of deep learning, has changed and why he thinks current approaches to AI will not get us to human-level AI. A modular structure One element of LeCun’s vision is a modular structure of different components inspired by various parts of the brain. This is a break from the popular approach in deep learning, where a single model is trained end to end. At the center of the architecture is a world model that predicts the states of the world. While modeling the world has been discussed and attempted in different AI architectures, those models are task-specific and can’t be adapted to different tasks. LeCun suggests that, like humans and animals, autonomous systems must have a single flexible world model. “One hypothesis in this paper is that animals and humans have only one world model engine somewhere in their prefrontal cortex,” LeCun writes. “That world model engine is dynamically configurable for the task at hand. With a single, configurable world model engine, rather than a separate model for every situation, knowledge about how the world works may be shared across tasks. This may enable reasoning by analogy, by applying the model configured for one situation to another situation.” The world model is complemented by several other modules that help the agent understand the world and take actions that are relevant to its goals. The “perception” module performs the role of the animal sensory system, collecting information from the world and estimating its current state with the help of the world model. 
In this regard, the world model performs two important tasks: First, it fills the missing pieces of information in the perception module (e.g., occluded objects), and second, it predicts the plausible future states of the world (e.g., where will the flying ball be in the next time step). The “cost” module evaluates the agent’s “discomfort,” measured in energy. The agent must take actions that reduce its discomfort. Some of the costs are hardwired, or “intrinsic costs.” For example, in humans and animals, these costs would be hunger, thirst, pain, and fear. Another submodule is the “trainable critic,” whose goal is to reduce the costs of achieving a particular goal, such as navigating to a location, building a tool, etc. The “short-term memory” module stores relevant information about the states of the world across time and the corresponding value of the intrinsic cost. Short-term memory plays an important role in helping the world model function properly and make accurate predictions. The “actor” module turns predictions into specific actions. It gets its input from all other modules and controls the outward behavior of the agent. Finally, a “configurator” module takes care of executive control, adjusting all other modules, including the world model, for the specific task that it wants to carry out. This is the key module that makes sure a single architecture can handle many different tasks. It adjusts the perception model, world model, cost function and actions of the agent based on the goal it wants to achieve. For example, if you’re looking for a tool to drive in a nail, your perception module should be configured to look for items that are heavy and solid, your actor module must plan actions to pick up the makeshift hammer and use it to drive the nail, and your cost module must be able to calculate whether the object is wieldy and near enough or you should be looking for something else that is within reach. 
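To make the division of labor concrete, here is a deliberately tiny sketch of the perception–world-model–cost–actor loop described above. Everything in it — the one-dimensional "world," the function names, the action set — is an illustrative assumption, not anything specified in LeCun's paper:

```python
# A toy sketch of LeCun's module layout: perception estimates state,
# the world model predicts the next state for a candidate action, the
# cost module scores "discomfort," and the actor picks the cheapest action.
# All names and dynamics here are illustrative, not from the paper.

def perception(observation):
    # Estimate world state from a raw observation (identity in this toy).
    return observation

def world_model(state, action):
    # Predict the next state; here the "world" is just a number line.
    return state + action

def cost(state, goal):
    # Intrinsic cost: distance from the goal (lower = less "discomfort").
    return abs(goal - state)

def actor(state, goal, actions=(-1, 0, 1)):
    # Mode-2-style planning: imagine each action with the world model,
    # pick the one whose predicted state minimizes cost.
    return min(actions, key=lambda a: cost(world_model(state, a), goal))

state, goal = perception(0), 3
for _ in range(5):                       # short rollout
    state = world_model(state, actor(state, goal))
print(state)  # the agent walks toward the goal and stays there: 3
```

The point of the sketch is only the wiring: the actor never acts blindly, it queries the world model and the cost module before committing to an action.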
Interestingly, in his proposed architecture, LeCun considers two modes of operation, inspired by Daniel Kahneman’s “ Thinking Fast and Slow ” dichotomy. The autonomous agent should have a “Mode 1” operating model, a fast and reflexive behavior that directly links perceptions to actions, and a “Mode 2” operating model, which is slower and more involved and uses the world model and other modules to reason and plan. Self-supervised learning While the architecture that LeCun proposes is interesting, implementing it poses several big challenges. Among them is training all the modules to perform their tasks. In his paper, LeCun makes ample use of the terms “differentiable,” “gradient-based” and “optimization,” all of which indicate that he believes that the architecture will be based on a series of deep learning models as opposed to symbolic systems in which knowledge has been embedded in advance by humans. LeCun is a proponent of self-supervised learning , a concept he has been talking about for several years. One of the main bottlenecks of many deep learning applications is their need for human-annotated examples, which is why they are called “supervised learning” models. Data labeling doesn’t scale, and it is slow and expensive. On the other hand, unsupervised and self-supervised learning models learn by observing and analyzing data without the need for labels. Through self-supervision, human children acquire commonsense knowledge of the world, including gravity, dimensionality and depth, object persistence and even things like social relationships. Autonomous systems should also be able to learn on their own. Recent years have seen some major advances in unsupervised learning and self-supervised learning, mainly in transformer models , the deep learning architecture used in large language models. Transformers learn the statistical relations of words by masking parts of a known text and trying to predict the missing part. 
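The masking objective the article describes can be illustrated without any neural network at all: the supervision signal comes from the text itself. The bigram-counting "model" below is a deliberately crude stand-in for a transformer, used only to show where the labels come from:

```python
from collections import Counter

# Toy self-supervised "masked word" task: mask a word and predict it
# from its left neighbor. No human labels are involved — the target is
# carved out of the raw text itself. The bigram counter is an
# illustrative simplification, not how transformers actually work.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = Counter(zip(corpus, corpus[1:]))

def predict_masked(prev_word):
    # Most frequent word seen after prev_word — the "filled-in" mask.
    candidates = {b: c for (a, b), c in bigrams.items() if a == prev_word}
    return max(candidates, key=candidates.get)

print(predict_masked("the"))  # most common successor of "the": cat
```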
One of the most popular forms of self-supervised learning is “contrastive learning,” in which a model is taught to learn the latent features of images through masking, augmentation, and exposure to different poses of the same object. However, LeCun proposes a different type of self-supervised learning, which he describes as “energy-based models.” EBMs try to encode high-dimensional data such as images into low-dimensional embedding spaces that only preserve the relevant features. By doing so, they can compute whether two observations are related to each other or not. In his paper, LeCun proposes the “Joint Embedding Predictive Architecture” (JEPA), a model that uses EBM to capture dependencies between different observations. “A considerable advantage of JEPA is that it can choose to ignore the details that are not easily predictable,” LeCun writes. Basically, this means that instead of trying to predict the world state at the pixel level, JEPA predicts the latent, low-dimensional features that are relevant to the task at hand. In the paper, LeCun further discusses Hierarchical JEPA (H-JEPA), a plan to stack JEPA models on top of each other to handle reasoning and planning at different time scales. “The capacity of JEPA to learn abstractions suggests an extension of the architecture to handle prediction at multiple time scales and multiple levels of abstraction,” LeCun writes. “Intuitively, low-level representations contain a lot of details about the input, and can be used to predict in the short term. But it may be difficult to produce accurate long-term predictions with the same level of detail. 
Conversely, high-level, abstract representation may enable long-term predictions, but at the cost of eliminating a lot of details.” The road to autonomous agents In his paper, LeCun admits that many things remain unanswered, including how to configure the models to learn the optimal latent features, and a precise architecture and function for the short-term memory module and its beliefs about the world. LeCun also says that the configurator module still remains a mystery and more work needs to be done to make it work correctly. But LeCun clearly states that current proposals for reaching human-level AI will not work. For example, one argument that has gained much traction in recent months is that of “it’s all about scale.” Some scientists suggest that by scaling transformer models with more layers and parameters and training them on bigger datasets, we’ll eventually reach artificial general intelligence. LeCun rejects this theory, arguing that LLMs and transformers work as long as they are trained on discrete values. “This approach doesn’t work for high-dimensional continuous modalities, such as video. To represent such data, it is necessary to eliminate irrelevant information about the variable to be modeled through an encoder, as in the JEPA,” he writes. Another theory is “reward is enough,” proposed by scientists at DeepMind. According to this theory, the right reward function and correct reinforcement learning algorithm are all you need to create artificial general intelligence. But LeCun argues that while RL requires the agent to constantly interact with its environment, much of the learning that humans and animals do is through pure perception. 
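The energy-based idea discussed above — encoding observations into low-dimensional embeddings and scoring compatibility between them — can be sketched in a few lines. The "encoder" here (mean and range of a signal) is an arbitrary illustrative choice, not JEPA's actual architecture:

```python
# Toy energy-based compatibility check in the spirit of JEPA: map
# high-dimensional observations to small embeddings, then measure
# "energy" as squared distance — low for compatible pairs, high for
# incompatible ones. The encoder below is an illustrative stand-in.

def encode(signal):
    # Collapse a long signal to 2 features: its mean and its range.
    return (sum(signal) / len(signal), max(signal) - min(signal))

def energy(x, y):
    # Low energy = the two observations are compatible.
    ex, ey = encode(x), encode(y)
    return sum((a - b) ** 2 for a, b in zip(ex, ey))

scene      = [0.0, 0.5, 1.0, 0.5, 0.0]   # an observed signal
same_scene = [0.1, 0.5, 0.9, 0.5, 0.1]   # slightly shifted view of it
other      = [5.0, 9.0, 7.0, 8.0, 6.0]   # unrelated observation

print(energy(scene, same_scene) < energy(scene, other))  # True
```

Note how the pixel-level differences between `scene` and `same_scene` vanish in the embedding space — a crude analogue of "ignoring the details that are not easily predictable."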
LeCun also rejects the hybrid “neuro-symbolic” approach, saying that the model probably won’t need explicit mechanisms for symbol manipulation, and describes reasoning as “energy minimization or constraint satisfaction by the actor using various search methods to find a suitable combination of actions and latent variables.” Much more needs to happen before LeCun’s blueprint becomes a reality. “It is basically what I’m planning to work on, and what I’m hoping to inspire others to work on, over the next decade,” he wrote on Facebook after he published the paper. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. © 2023 VentureBeat. All rights reserved. "
14,497
2,022
"The challenge of making data science zero-trust | VentureBeat"
"https://venturebeat.com/2022/05/05/the-challenge-of-making-data-science-zero-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The challenge of making data science zero-trust Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On March 21, President Biden warned of cyberattacks from Russia and reiterated the need to improve the state of domestic cybersecurity. We live in a world where adversaries have many ways to infiltrate our systems. As a result, today’s security professionals need to act under the premise that no part of a network should be trusted. Malicious actors increasingly have free reign in cyberspace, so failure must be presumed at each node. This is known as a ‘ zero trust ’ architecture. In the digital world, in other words, we must now presume the enemy is everywhere and act accordingly. A recent executive order from the Biden administration specifically calls for a zero-trust approach to securing the United States government’s data, building on the Department of Defense’s own zero-trust strategy released earlier this year. 
The digital world is now so fundamentally insecure that a zero-trust strategy is warranted anywhere computing is taking place — with one exception: data science. It is not yet possible to accept the tenets of zero trust while also enabling data science activities and the AI systems they give rise to. This means that just as calls for the use of AI are growing, so too is the gap between the demands of cybersecurity and an organization’s ability to invest in data science and AI. Finding a way to apply evolving security practices to data science has become the most pressing policy issue in the world of technology. The problem with zero trust for data Data science rests on human judgment, which is to say that in the process of creating analytic models, someone, somewhere must be trusted. How else can we take large volumes of data, assess the value of the data, clean and transform the data, and then build models based on the insights the data hold? If we were to completely remove any trusted actors from the lifecycle of analytic modeling, as is the logical conclusion of the zero-trust approach, that lifecycle would collapse — there would be no data scientist to engage in the modeling. In practice, data scientists spend only about 20% of their time engaged in what might be considered “data science.” The other 80% of their time is spent on more painstaking activities such as evaluating, cleaning, and transforming raw datasets to make data ready for modeling — a process that, collectively, is referred to as “data munging.” Data munging is at the heart of all analytics. Without munging, there are no models. And without trust, there can be no munging. Munging requires raw access to data; it requires the ability to change that data in a variety of unpredictable ways, and it frequently requires unconstrained time spent with the raw data itself. 
Now, compare the requirements of munging to the needs of zero trust. Here, for example, is how the National Institute of Standards and Technology (NIST) describes the process of implementing zero trust in practice: …protections usually involve minimizing access to resources (such as data and compute resources and applications/services) to only those subjects and assets identified as needing access as well as continually authenticating and authorizing the identity and security posture of each access request… By this description, for zero trust to work, every request to access data must be individually and continually authenticated (“does the right person require the right access to the data?”) and authorized (“should the requested access be granted or not?”). In practice, this is akin to inserting administrative oversight between a writer and their keyboard, reviewing and approving every key before it is punched. Put more simply, the need to munge — to engage in pure, unadulterated access to raw data — undermines every basic requirement of zero trust. So, what to do? Zero trust for data science There are three fundamental tenets that can help to realign the emerging requirements of zero trust to the needs of data science: minimization, distributed data, and high observability. We start with minimization , a concept already embedded into a host of data protection laws and regulations and a longstanding principle within the information security community. The principle of minimization mandates that no more data is ever accessible than is needed for specific tasks. This ensures that if a breach does occur, there are some limits to how much data is exposed. If we think in terms of “attack surfaces,” minimization ensures that the attack surface is as shallow as possible — any successful attack is brunted because, even once successful, the attacker will not have access to all the underlying data, only some of it. 
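A minimization pass of the kind described here might look like the following sketch; the field names, the salt, and the truncation choices are made-up examples, not anything prescribed by the article:

```python
import hashlib

# Toy data-minimization pass applied before data reaches the modeling
# environment: pseudonymize direct identifiers, truncate quasi-identifiers,
# and pass through only the fields the model actually needs. Field names
# and the salt are illustrative assumptions.
SALT = b"rotate-me-per-project"

def pseudonymize(value: str) -> str:
    # One-way hash so records stay joinable without exposing the raw ID.
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    return {
        "ssn": pseudonymize(record["ssn"]),  # raw value rarely needed
        "birth_year": record["dob"][:4],     # year often suffices
        "diagnosis": record["diagnosis"],    # kept: needed for the model
    }

row = {"ssn": "123-45-6789", "dob": "1984-07-21", "diagnosis": "J45"}
clean = minimize(row)
print(clean["birth_year"])  # 1984 — the full date of birth never leaves
```

One caveat worth stating plainly: a plain salted hash of a low-entropy identifier like an SSN is brute-forceable, so a real deployment would use a keyed construction such as HMAC with a properly managed secret.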
This means that before data scientists engage with raw data, they should justify how much data and in what form they need it. Do they need full social security numbers? Rarely. Do they need full birth dates? Sometimes. Hashing, or other basic anonymization or pseudonymization practices, should be applied as widely as possible as a baseline defensive measure. Ensuring that basic minimization practices are applied to the data will serve to blunt the impact of any successful attack, constituting the first and best way to apply zero trust to data science. There are times when minimization might not be possible, given the needs of the data scientist and their use case. At times in the healthcare and life sciences space, for example, there is no way around using patient or diagnostic data for modeling. In this case, the following two tenets are even more important. The tenet of distributed data requires the decentralized storage of data to limit the impact of any one breach. If minimization keeps the attack surface shallow, distributed data ensures that the surface is as wide as possible, increasing the time and resource costs required for any successful attack. For example, while a variety of departments and agencies in the US government have been subject to massive hacks, one organization has not: Congress. This is not because the First Branch itself has mastered the nuances of cybersecurity better than its peers but simply because there is no such thing as “Congress” from a cybersecurity perspective. Each of its 540-plus offices manages its own IT resources separately, meaning an intruder would need to successfully hack into hundreds of separate environments rather than just one. As Dan Geer warned nearly two decades ago , diversity is among the best protections for single-source failures. The more distributed the data, the harder it will be to centralize and therefore compromise, and the more protected it will be over time. 
However, a warning: Diverse computing environments are complex, and complexity itself is costly in terms of time and resources. Embracing this type of diversity in many ways cuts against the trend towards the adoption of single cloud compute environments, which are designed to simplify IT needs and move organizations away from a siloed approach to data. Data mesh architectures are helping to make it possible to retain decentralized architecture while unifying access to data through a single data access layer. However, some limits on distributed data might be warranted in practice. And this brings us to our last point: high observability. High observability is the monitoring of as many activities in cyberspace as is possible, enough to be able to form a compelling baseline for what counts as “normal” behavior so that meaningful deviations from this baseline can be spotted. This can be applied at the data layer, tracking what the underlying data looks like and how it might be changing over time. It can be applied to the query layer, understanding how and when the data is being queried, for what reason, and what each individual query looks like. And it can be applied to the user layer, understanding which individual users are accessing the data and when, and monitoring these elements both in real-time and during audits. At a basic level, some data scientists, somewhere, must be fully trusted if they are to successfully do their job, and observability is the last and best defense organizations have to secure their data, ensuring that any compromise is detected even if it cannot be prevented. Note that observability is only protective in layers. Organizations must track each layer and their interactions to fully understand their threat environment and to protect their data and analytics. For example, anomalous activity at the query layer might be reasonable in light of the user activity (is it the user’s first day on the job?) 
or due to changes to the data itself (did the data drift so significantly that a more expansive query was needed to determine how the data changed?). Only by understanding how changes and patterns at each layer interact can organizations develop a sufficiently broad understanding of their data to implement a zero-trust approach while enabling data science in practice. What next? Adopting a zero-trust approach to data science environments is admittedly far from straightforward. To some, applying the tenets of minimization, distributed data, and high observability to these environments might seem impossible, at least in practice. But if you don’t take steps to secure your data science environment, the difficulties of applying zero trust to that environment will only become more acute over time, rendering entire data science programs and AI systems fundamentally insecure. This means that now is the time to get started, even if the path forward is not yet fully clear. Matthew Carroll is CEO of Immuta. "
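A minimal illustration of observability at the query layer — building a baseline of "normal" behavior and flagging meaningful deviations, as the article describes — might look like this. The counts and the 3-sigma threshold are illustrative assumptions:

```python
import statistics

# Toy "high observability" baseline at the query layer: learn what a
# normal daily query volume looks like, then flag days that deviate
# strongly from it. Numbers and threshold are illustrative.
history = [102, 95, 110, 99, 104, 97, 101]   # past daily query counts

mean = statistics.mean(history)
std = statistics.stdev(history)

def is_anomalous(todays_count, threshold=3.0):
    # Flag anything more than `threshold` standard deviations off baseline.
    return abs(todays_count - mean) / std > threshold

print(is_anomalous(103))   # an ordinary day -> False
print(is_anomalous(480))   # a sudden bulk export -> True
```

In practice the same pattern would run per user and per dataset, and a flagged deviation would be interpreted in light of the other layers (new user? drifting data?) before triggering a response.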
14,498
2,022
"McKinsey report: Two AI trends top 2022 outlook | VentureBeat"
"https://venturebeat.com/ai/mckinsey-report-two-ai-trends-top-2022-outlook"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages McKinsey report: Two AI trends top 2022 outlook Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. McKinsey’s newly-released Technology Trends Outlook 2022 named applied AI and industrializing machine learning as two of 14 of the most significant technology trends unfolding today. According to McKinsey, the study builds on trend research shared in 2021, adding new data and deeper analysis and examining “such tangible, quantitative factors as investment, research activity, and news coverage to gauge the momentum of each trend.” Applied AI tops list with maturity and innovation Applied AI , considered by McKinsey as based on proven and mature technologies, scored highest of all 14 trends on quantitative measures of innovation, interest and investment, with viable applications in more industries and closer to a state of mainstream adoption than other trends. 
In a 2021 McKinsey Global Survey on the state of AI, 56% of respondents said their organizations had adopted AI, up from 50% in the 2020 survey. According to the 2022 report, tech industries are leading in AI adoption, while product development and service operations are the business functions that have seen the most benefits from applied AI. Roger Roberts, partner at McKinsey and one of the report’s coauthors, said of applied AI, which is defined “quite broadly” in the report: “We see things moving from advanced analytics towards… putting machine learning to work on large-scale datasets in service of solving a persistent problem in a novel way.” That move is reflected in an explosion of publication around AI, not just because AI scientists are publishing more, but because people in a range of domains are using AI in their research and pushing the application of AI forward, he explained. “There is really that path from science, to engineering, to scale,” he said. “We see AI moving quite quickly down that path, and what I’m really excited about is the fact that more things are moving from engineering to scale.” However, the McKinsey report also highlighted a variety of key uncertainties that could affect the future of applied AI, including the availability of talent and funding, cybersecurity concerns and questions from stakeholders about the responsible and trustworthy use of AI. McKinsey says industrializing AI is a growing trend According to the McKinsey report, industrializing machine learning (ML) “involves creating an interoperable stack of technical tools for automating ML and scaling up its use so that organizations can realize its full potential.” The report noted that McKinsey expects industrializing ML to spread as more companies seek to use AI for a growing number of applications. 
“It does encompass MLops, but it extends more fully to include the way to think of the technology stack that supports scaling, which can get down to innovations at the microprocessor level,” said Roberts. “You’re seeing lots of new capabilities in silicon that support the acceleration of particular classes of AI work, and those innovations will move into broader use, allowing for faster and more efficient scaling both in terms of computing resources, but also more sustainability.” The report cites software solutions corresponding to the ML workflow, including data management, model development, model deployment and live model operations. It also includes integrated hardware and heterogeneous computing used in ML workflow operations. Roberts added that he sees big tech organizations such as Google, Meta and Microsoft as in the lead on industrialized ML “by a longshot.” But he predicted the trend would soon make its way well beyond those companies: “We’ll start to see more and more venture activity and corporate investment as we build that tool chain for this new class of software and this new class of product as productized services,” he explained. McKinsey predicts continued AI momentum Roberts emphasized that in his view, economic issues won’t change AI’s powerful momentum. “There’s never been a better time to be leading the application of AI to exciting business problems,” he said. “I think there’s enough momentum and capability flowing along the path of science to engineering to scale.” He did add, however, that within industries there may be some growing separation of leaders and laggards. “Leaders will continue to make the right investments in talent tooling and capabilities to help deliver scale,” he said. “Laggards may let the opportunity slip away if they’re not careful.” 
"
14,499
2,021
"New deep learning model brings image segmentation to edge devices | VentureBeat"
"https://venturebeat.com/ai/new-deep-learning-model-brings-image-segmentation-to-edge-devices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New deep learning model brings image segmentation to edge devices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new neural network architecture designed by artificial intelligence researchers at DarwinAI and the University of Waterloo will make it possible to perform image segmentation on computing devices with low-power and -compute capacity. Segmentation is the process of determining the boundaries and areas of objects in images. We humans perform segmentation without conscious effort, but it remains a key challenge for machine learning systems. It is vital to the functionality of mobile robots, self-driving cars, and other artificial intelligence systems that must interact and navigate the real world. Until recently, segmentation required large, compute-intensive neural networks. This made it difficult to run these deep learning models without a connection to cloud servers. 
In their latest work, the scientists at DarwinAI and the University of Waterloo have managed to create a neural network that provides near-optimal segmentation and is small enough to fit on resource-constrained devices. Called AttendSeg, the neural network is detailed in a paper that has been accepted at this year’s Conference on Computer Vision and Pattern Recognition (CVPR). Object classification, detection, and segmentation One of the key reasons for the growing interest in machine learning systems is the problems they can solve in computer vision. Some of the most common applications of machine learning in computer vision include image classification, object detection, and segmentation. Image classification determines whether a certain type of object is present in an image or not. Object detection takes image classification one step further and provides the bounding box where detected objects are located. Segmentation comes in two flavors: semantic segmentation and instance segmentation. Semantic segmentation specifies the object class of each pixel in an input image. Instance segmentation separates individual instances of each type of object. For practical purposes, the output of segmentation networks is usually presented by coloring pixels. Segmentation is by far the most complicated type of classification task. Above: Image classification vs. object detection vs. semantic segmentation (credit: codebasics). The complexity of convolutional neural networks (CNN), the deep learning architecture commonly used in computer vision tasks, is usually measured in the number of parameters they have. The more parameters a neural network has, the more memory and computational power it requires. RefineNet, a popular semantic segmentation neural network, contains more than 85 million parameters. 
At 4 bytes per parameter, it means that an application using RefineNet requires at least 340 megabytes of memory just to run the neural network. And given that the performance of neural networks is largely dependent on hardware that can perform fast matrix multiplications, it means that the model must be loaded on the graphics card or some other parallel computing unit, where memory is more scarce than the computer’s RAM. Machine learning for edge devices Due to their hardware requirements, most applications of image segmentation need an internet connection to send images to a cloud server that can run large deep learning models. The cloud connection can pose additional limits to where image segmentation can be used. For instance, if a drone or robot will be operating in environments where there’s no internet connection, then performing image segmentation will become a challenging task. In other domains, AI agents will be working in sensitive environments and sending images to the cloud will be subject to privacy and security constraints. The lag caused by the roundtrip to the cloud can be prohibitive in applications that require real-time response from the machine learning models. And it is worth noting that network hardware itself consumes a lot of power, and sending a constant stream of images to the cloud can be taxing for battery-powered devices. For all these reasons (and a few more), edge AI and tiny machine learning (TinyML) have become hot areas of interest and research both in academia and in the applied AI sector. The goal of TinyML is to create machine learning models that can run on memory- and power-constrained devices without the need for a connection to the cloud. Above: The architecture of AttendSeg on-device semantic segmentation neural network. With AttendSeg, the researchers at DarwinAI and the University of Waterloo tried to address the challenges of on-device semantic segmentation. 
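The memory arithmetic above is simple enough to sketch directly: weight storage is just the parameter count times the bytes per parameter. The helper name below is my own, but the figures come from the article (roughly 85 million parameters at 4 bytes each).

```python
def model_memory_mb(num_params: int, bytes_per_param: int = 4) -> float:
    """Memory needed just to store a network's weights, in megabytes."""
    return num_params * bytes_per_param / 1e6

# RefineNet's ~85 million parameters at 32-bit (4-byte) precision:
print(model_memory_mb(85_000_000))  # 340.0, matching the article's ~340 MB
```

Note this counts only the weights; activations, gradients, and framework overhead push the real footprint higher, which is why such models strain edge hardware.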
“The idea for AttendSeg was driven by both our desire to advance the field of TinyML and market needs that we have seen as DarwinAI,” Alexander Wong, co-founder at DarwinAI and Associate Professor at the University of Waterloo, told TechTalks. “There are numerous industrial applications for highly efficient edge-ready segmentation approaches, and that’s the kind of feedback along with market needs that I see that drives such research.” The paper describes AttendSeg as “a low-precision, highly compact deep semantic segmentation network tailored for TinyML applications.” The AttendSeg deep learning model performs semantic segmentation at an accuracy that is almost on par with RefineNet while cutting down the number of parameters to 1.19 million. Interestingly, the researchers also found that lowering the precision of the parameters from 32 bits (4 bytes) to 8 bits (1 byte) did not result in a significant performance penalty while enabling them to shrink the memory footprint of AttendSeg by a factor of four. The model requires a little more than one megabyte of memory, which is small enough to fit on most edge devices. “[8-bit parameters] do not pose a limit in terms of generalizability of the network based on our experiments, and illustrate that low precision representation can be quite beneficial in such cases (you only have to use as much precision as needed),” Wong said. Above: Experiments show AttendSeg provides optimal semantic segmentation while cutting down the number of parameters and memory footprint. Attention condensers for computer vision AttendSeg leverages “attention condensers” to reduce model size without compromising performance. Self-attention mechanisms improve the efficiency of neural networks by focusing on the information that matters. Self-attention techniques have been a boon to the field of natural language processing. They have been a defining factor in the success of deep learning architectures such as Transformers. 
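The 32-bit-to-8-bit precision drop described above can be sketched with simple linear quantization. This is a generic illustration of the idea, not the quantization scheme the AttendSeg paper actually uses; the function name and the symmetric max-scaling choice are mine.

```python
import numpy as np

def quantize_int8(weights):
    """Linearly map float32 weights onto int8, returning the quantized
    values plus the scale factor that approximately reconstructs them."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1_190_000).astype(np.float32)  # AttendSeg-sized tensor
q, scale = quantize_int8(w)

ratio = w.nbytes // q.nbytes  # 4x smaller footprint, as the article reports
# Reconstruction error is bounded by half a quantization step (scale / 2):
max_err = float(np.abs(q.astype(np.float32) * scale - w).max())
```

The 4x shrink falls directly out of the storage sizes (4 bytes per float32 vs. 1 byte per int8), while the bounded rounding error is why accuracy often survives the precision cut.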
While previous architectures such as recurrent neural networks had a limited capacity on long sequences of data, Transformers used self-attention mechanisms to expand their range. Deep learning models such as GPT-3 leverage Transformers and self-attention to churn out long strings of text that ( at least superficially ) maintain coherence over long spans. AI researchers have also leveraged attention mechanisms to improve the performance of convolutional neural networks. Last year, Wong and his colleagues introduced attention condensers as a very resource-efficient attention mechanism and applied them to image classifier machine learning models. “[Attention condensers] allow for very compact deep neural network architectures that can still achieve high performance, making them very well suited for edge/TinyML applications,” Wong said. Above: Attention condensers improve the performance of convolutional neural networks in a memory-efficient way. Machine-driven design of neural networks One of the key challenges of designing TinyML neural networks is finding the best performing architecture while also adhering to the computational budget of the target device. To address this challenge, the researchers used “ generative synthesis ,” a machine learning technique that creates neural network architectures based on specified goals and constraints. Basically, instead of manually fiddling with all kinds of configurations and architectures, the researchers provide a problem space to the machine learning model and let it discover the best combination. “The machine-driven design process leveraged here (Generative Synthesis) requires the human to provide an initial design prototype and human-specified desired operational requirements (e.g., size, accuracy, etc.) and the MD design process takes over in learning from it and generating the optimal architecture design tailored around the operational requirements and task and data at hand,” Wong said. 
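Attention condensers are DarwinAI's own resource-efficient design, detailed in the linked paper; as background, the standard scaled dot-product self-attention they streamline can be sketched in a few lines of NumPy. All names below are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X (n x d):
    each output row is a weighted mix of all value vectors, with weights
    concentrating on the positions most relevant to that token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise relevance
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 5, 8                                   # 5 tokens, 8-dim embeddings
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)     # out: (5, 8), attn: (5, 5)
```

The n-by-n attention matrix is the expensive part on long sequences, which is exactly the cost that compact mechanisms like attention condensers aim to avoid on edge hardware.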
For their experiments, the researchers used machine-driven design to tune AttendSeg for Nvidia Jetson, hardware kits for robotics and edge AI applications. But AttendSeg is not limited to Jetson. “Essentially, the AttendSeg neural network will run fast on most edge hardware compared to previously proposed networks in literature,” Wong said. “However, if you want to generate an AttendSeg that is even more tailored for a particular piece of hardware, the machine-driven design exploration approach can be used to create a new highly customized network for it.” AttendSeg has obvious applications for autonomous drones, robots, and vehicles, where semantic segmentation is a key requirement for navigation. But on-device segmentation can have many more applications. “This type of highly compact, highly efficient segmentation neural network can be used for a wide variety of things, ranging from manufacturing applications (e.g., parts inspection / quality assessment, robotic control) to medical applications (e.g., cell analysis, tumor segmentation), satellite remote sensing applications (e.g., land cover segmentation), and mobile application (e.g., human segmentation for augmented reality),” Wong said. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021 
"
14,500
2,014
"OpenWorm project wants you to help create the world's first digital organism | VentureBeat"
"https://venturebeat.com/mobile/openworm-is-going-to-be-a-digital-organism-in-your-browser"
"OpenWorm project wants you to help create the world’s first digital organism OpenWorm is a digital organism. Stephen Larson wants to create the world’s first digital organism, and he is starting simple — with a worm. Larson has launched a Kickstarter crowdfunding campaign to raise money to create OpenWorm as part of an effort to accelerate our understanding of the human brain. If his team succeeds, he says, artificial intelligence will move forward and possibly even provide clues to brain diseases like Alzheimer’s and Parkinson’s. Those who support the project will get a digital copy of the worm, dubbed WormSim, to run in their browser. The idea is to create an interactive worm based on its real biology. 
Larson’s team is also creating something called the OpenWorm Academy, where contributors to its Kickstarter campaign can glean their rewards in the form of an online course in digital biology. The course will also give participants “a behind-the-scenes look at OpenWorm,” according to the program’s Kickstarter page. Backers will be able to modify the code for the worm as they wish in a quest to create an artificial sentient life. Once you get the 3D-animated WormSim, you can inspect the model and try to figure out what makes it tick. Each cell lights up and reveals its activity. You can click on parts like its muscles and interact with it in a 3D environment. Larson notes that scientists don’t fully understand the brain of a simple creature like a C. Elegans worm, a microscopic worm that has just 1,000 cells. “If we can’t build a computer model of a worm, the most studied organism in all of biology, we don’t stand a chance at understanding something as complex as the human brain,” he said. “OpenWorm gives the world front row access to the cutting edge of digital biology.” Above: Stephen Larson Back in 2011, Larson got the idea from watching Star Trek: The Next Generation, which featured Lt. Commander Data, an artificial intelligence officer. He thought that scientists would be able to create Data by starting out with a worm. A bunch of other academics liked the idea, and they created an open science project. So far, the project has raised $27,156 from 312 backers. It still has 19 days to go in a campaign aimed at raising $120,000. They’re raising the money for core engineering, administration, and educational outreach. They want to build a better, more accurate version of the worm. While Larson is in San Diego, Calif., his team is spread across the world. The team includes John White, Matteo Cantarelli, Giovanni Idili, Sergey Khayrulin, Andrey Palyanov, and Balasz Szigeti. 
"
14,501
2,022
"DALL-E 2 coming to Microsoft's Azure AI, by invitation | VentureBeat"
"https://venturebeat.com/ai/dall-e-2-coming-to-microsofts-azure-ai-by-invitation"
"DALL-E 2 coming to Microsoft’s Azure AI, by invitation DALL-E 2 is coming to Microsoft’s Azure OpenAI Service by invitation, allowing select Azure AI customers to generate custom images using text or images. The company made the announcement today at Microsoft Ignite 2022, a conference for developers and IT professionals. “Mattel is actually already using this for their Hot Wheels cars,” said John Montgomery, corporate vice president for Microsoft’s Azure AI platform. “Designers can actually give it prompts and quickly get ideas and tweak modifications.” A Microsoft blog post gave an example of Mattel designers typing in a prompt such as “a scale model of a classic car” and DALL-E 2 will generate an image of a toy vintage car, perhaps silver in color and with whitewall tires. 
Then, the designer could erase the top of the car and type, “Make it a convertible” and DALL-E 2 will update the image of the car as a convertible, and then tweak it to add “pink top.” In the blog post, Carrie Buse, director of product design at Mattel Future Lab, said she sees artificial intelligence (AI) technology such as DALL-E 2 as a tool to help designers generate more ideas. “Ultimately, quality is the most important thing,” she noted. “But sometimes quantity can help you find the quality.” The Azure OpenAI Service is currently available in preview with limited access and has been used by customers and partners to access powerful GPT-3 models for common use cases including writing assistance, natural language-to-code generation and parsing data. Microsoft says it added a responsible AI layer Adding DALL-E 2 to Azure OpenAI Service will allow customers to generate creative content backed by Azure’s cloud AI infrastructure, enterprise-grade security and compliance features. Microsoft also claims its built-in responsible AI features will help prevent DALL-E 2 from delivering inappropriate outputs. The company says it removed images from the model training dataset that contain sexual or violent content. It also maintains Azure OpenAI’s filters remove harmful content from prompts and prevent DALL-E 2 from creating images of celebrities and religious objects, as well as “objects that are commonly used to try to trick the system into generating sexual or violent content.” On the output side, the Azure AI team added models that remove AI-generated images that appear to contain adult, gore and other types of inappropriate content. 
“We’re taking the model, putting it on Azure and bringing all the enterprise credibility and technologies we have there — the security, the compliance, the regional rollouts, everything else, we’re adding a layer around it, kind of our responsible AI,” said Montgomery. “OpenAI has its layers and then we have additional layers on top.” Addressing DALL-E ownership issues The Microsoft blog post also emphasized that the Azure OpenAI Service terms today “does not claim ownership of the output of these services.” Other than for its acceptable use policies, “Microsoft’s terms do not restrict the commercialization of images generated by these services, although customers are ultimately responsible for making their own decisions about the commercial usability of images they generate.” Those comments come as users and experts continue to raise questions about who owns DALL-E images. When OpenAI announced expanded beta access to DALL-E in July, the company offered paid subscription users full usage rights to reprint, sell and merchandise the images they create with the powerful text-to-image generator. And in late September, OpenAI announced that the research laboratory was removing the waitlist for its DALL-E beta, allowing anyone to sign up — citing improved safety systems and lessons learned from real-world use. Bradford Newman, who leads the machine learning and AI practice of global law firm Baker McKenzie, in its Palo Alto office, said the answer to the question “Who owns DALL-E images?” is far from clear. And, he emphasized, legal fallout is inevitable. “If DALL-E is adopted in the way I think [Open AI] envisions it, there’s going to be a lot of revenue generated by the use of the tool,” he told VentureBeat in August. 
“And when you have a lot of players in the market and issues at stake, you have a high chance of litigation.” "
14,502
2,022
"3 essential abilities AI is missing | VentureBeat"
"https://venturebeat.com/ai/3-essential-abilities-ai-is-missing"
"3 essential abilities AI is missing Throughout the past decade, deep learning has come a long way from a promising field of artificial intelligence (AI) research to a mainstay of many applications. However, despite progress in deep learning, some of its problems have not gone away. Among them are three essential abilities: to understand concepts, to form abstractions and to draw analogies — that’s according to Melanie Mitchell, professor at the Santa Fe Institute and author of “Artificial Intelligence: A Guide for Thinking Humans.” During a recent seminar at the Institute of Advanced Research in Artificial Intelligence, Mitchell explained why abstraction and analogy are the keys to creating robust AI systems. While the notion of abstraction has been around since the term “artificial intelligence” was coined in 1955, this area has largely remained understudied, Mitchell says. 
As the AI community puts a growing focus and resources toward data-driven, deep learning–based approaches, Mitchell warns that what seems to be a human-like performance by neural networks is, in fact, a shallow imitation that misses key components of intelligence. From concepts to analogies “There are many different definitions of ‘concept’ in the cognitive science literature, but I particularly like the one by Lawrence Barsalou: A concept is ‘a competence or disposition for generating infinite conceptualizations of a category,’” Mitchell told VentureBeat. For example, when we think of a category like “trees,” we can conjure all kinds of different trees, both real and imaginary, realistic or cartoonish, concrete or metaphorical. We can think about natural trees, family trees or organizational trees. “There is some essential similarity — call it ‘treeness’ — among all these,” Mitchell said. “In essence, a concept is a generative mental model that is part of a vast network of other concepts.” While AI scientists and researchers often refer to neural networks as learning concepts, the key difference that Mitchell points out is what these computational architectures learn. While humans create “generative” models that can form abstractions and use them in novel ways, deep learning systems are “discriminative” models that can only learn shallow differences between different categories. For instance, a deep learning model trained on many labeled images of bridges will be able to detect new bridges, but it won’t be able to look at other things that are based on the same concept — such as a log connecting two river shores or ants that form a bridge to fill a gap, or abstract notions of “bridge,” such as bridging a social gap. 
Discriminative models have pre-defined categories for the system to choose among — e.g., is the photo a dog, a cat, or a coyote? Rather, one has to generate an analogy in order to flexibly apply one’s knowledge to a new situation, Mitchell explained. “If I know something about trees, and see a picture of a human lung, with all its branching structure, I don’t classify it as a tree, but I do recognize the similarities at an abstract level — I am taking what I know, and mapping it onto a new situation,” she said. Why is this important? The real world is filled with novel situations. It is important to learn from as few examples as possible and be able to find connections between old observations and new ones. Without the capacity to create abstractions and draw analogies—the generative model—we would need to see infinite training examples to be able to handle every possible situation. This is one of the problems that deep neural networks currently suffer from. Deep learning systems are extremely sensitive to “out of distribution” (OOD) observations, instances of a category that are different from the examples the model has seen during training. For example, a convolutional neural network trained on the ImageNet dataset will suffer from a considerable performance drop when faced with real-world images where the lighting or the angle of objects is different from the training set. Likewise, a deep reinforcement learning system trained to play the game Breakout at a superhuman level will suddenly deteriorate when a simple change is made to the game, such as moving the paddle a few pixels up or down. In other cases, deep learning models learn the wrong features in their training examples. 
In one study, Mitchell and her colleagues examined a neural network trained to classify images between “animal” and “no animal.” They found that instead of animals, the model had learned to detect images with blurry backgrounds — in the training dataset, the images of animals were focused on the animals and had blurry backgrounds while non-animal images had no blurry parts. “More broadly, it’s easier to ‘cheat’ with a discriminative model than with a generative model — sort of like the difference between answering a multiple-choice versus an essay question,” Mitchell said. “If you just choose from a number of alternatives, you might be able to perform well even without really understanding the answer; this is harder if you have to generate an answer.” Abstractions and analogies in deep learning The deep learning community has taken great strides to address some of these problems. For one, “explainable AI” has become a field of research for developing techniques to determine the features neural networks are learning and how they make decisions. At the same time, researchers are working on creating balanced and diversified training datasets to make sure deep learning systems remain robust in different situations. The field of unsupervised and self-supervised learning aims to help neural networks learn from unlabeled data instead of requiring predefined categories. One field that has seen remarkable progress is large language models (LLM), neural networks trained on hundreds of gigabytes of unlabeled text data. LLMs can often generate text and engage in conversations in ways that are consistent and very convincing, and some scientists claim that they can understand concepts. However, Mitchell argues that if we define concepts in terms of abstractions and analogies, it is not clear that LLMs are really learning concepts. 
For example, humans understand that the concept of “plus” is a function that combines two numerical values in a certain way, and we can use it very generally. On the other hand, large language models like GPT-3 can correctly answer simple addition problems most of the time but sometimes make “non-human-like mistakes” depending on how the problem is asked. “This is evidence that [LLMs] don’t have a robust concept of ‘plus’ like we do, but are using some other mechanism to answer the problems,” Mitchell said. “In general, I don’t think we really know how to determine in general if an LLM has a robust human-like concept — this is an important question.” Recently, scientists have created several benchmarks that try to assess the capacity of deep learning systems to form abstractions and analogies. An example is RAVEN, a set of problems that evaluate the capacity to detect concepts such as numerosity, sameness, size difference and position difference. However, experiments show that deep learning systems can cheat such benchmarks. When Mitchell and her colleagues examined a deep learning system that scored very high on RAVEN, they realized that the neural network had found “shortcuts” that allowed it to predict the correct answer without even seeing the problem. “Existing AI benchmarks in general (including benchmarks for abstraction and analogy) don’t do a good enough job of testing for actual machine understanding rather than machines using shortcuts that rely on spurious statistical correlations,” Mitchell said. “Also, existing benchmarks typically use a random ‘training/test’ split, rather than systematically testing if a system can generalize well.” Another benchmark is the Abstract Reasoning Corpus (ARC), created by AI researcher François Chollet. ARC is particularly interesting because it contains a very limited number of training examples, and the test set is composed of challenges that are different from the training set. 
ARC has become the subject of a contest on the Kaggle data science and machine learning platform. But so far, there has been very limited progress on the benchmark. “I really like Francois Chollet’s ARC benchmark as a way to deal with some of the problems/limitations of current AI and AI benchmarks,” Mitchell said. She noted that she sees promise in the work being done at the intersection of AI and developmental learning, or “looking at how children learn and how that might inspire new AI approaches.” What will be the right architecture to create AI systems that can form abstractions and analogies like humans remains an open question. Deep learning pioneers believe that bigger and better neural networks will eventually be able to replicate all functions of human intelligence. Other scientists believe that we need to combine deep learning with symbolic AI. What is for sure is that as AI becomes more prevalent in applications we use every day, it will be important to create robust systems that are compatible with human intelligence and work — and fail — in predictable ways. "
14,503
2,023
"US DoD AI chief on LLMs: 'I need hackers to tell us how this stuff breaks' | VentureBeat"
"https://venturebeat.com/ai/us-dod-ai-chief-on-llms-i-need-hackers-to-tell-us-how-this-stuff-breaks"
"US DoD AI chief on LLMs: ‘I need hackers to tell us how this stuff breaks’ Craig Martell of the Defense Department at DEF CON On the main stage at the DEF CON security conference in a Friday afternoon session (Aug. 11), Craig Martell, chief digital and AI officer at the U.S. Defense Department (DoD), came bearing a number of key messages. First off, he wants people to understand that large language models (LLMs) are not sentient and aren’t actually able to reason. Martell and the DoD also want more rigor in model development to help limit the risks of hallucination — wherein AI chatbots generate false information. Martell, who is also an adjunct professor at Northeastern University teaching machine learning (ML), treated the mainstage DEF CON session like a lecture, repeatedly asking the audience for opinions and answers. 
AI overall was a big topic at DEF CON, with the AI Village , a community of hackers and data scientists, hosting an LLM hacking competition. Whether it’s at a convention like DEF CON or as part of bug bounty efforts, Martell wants more research into LLMs’ potential vulnerabilities. He helps lead the DoD’s Task Force LIMA , an effort to understand the potential and the limitations of generative AI and LLMs in the DoD. “I’m here today because I need hackers everywhere to tell us how this stuff breaks,” Martell said. “Because if we don’t know how it breaks, we can’t get clear on the acceptability conditions, and if we can’t get clear on the acceptability conditions, we can’t push industry towards building the right thing, so that we can deploy it and use it.” LLMs are great but they don’t actually reason Martell spent a lot of time during his session pointing out that LLMs don’t actually reason. In his view, the current hype cycle surrounding generative AI has led to misplaced expectations about what an LLM can and cannot do. “We evolved to treat things that speak fluently as reasoning beings,” Martell said. He explained that at the most basic level a large language model is a model that predicts the next word, given the prior words. LLMs are trained on massive volumes of data with immense computing power, but he stressed that an LLM is just one big statistical model that relies on past context. “They seem really fluent, because you can predict a whole sequence of next words based upon a massive context that makes it sound really complex,” he said. The lack of reasoning is tied to the phenomenon of hallucination, in Martell’s view. He argued that a primary focus of LLMs is fluency, not reasoning, and that the pursuit of fluency leads to errors — specifically, hallucinations. 
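Martell's description of an LLM, a model that "predicts the next word, given the prior words," can be made concrete with a toy bigram model. This is a drastic simplification: a real LLM conditions on a long context with billions of learned parameters, while the sketch below just counts word pairs in a tiny made-up corpus, but the core loop of predicting the most likely next token is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real LLM trains on.
corpus = "the model predicts the next word given the prior words".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("given"))  # -> "the"
```

Note that the model has no notion of truth or reasoning: it only reproduces statistics of its training text, which is exactly the limitation Martell describes.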
“We as humans, I believe, are duped by fluency,” he said. Identifying every hallucination is hard and that’s another key concern for Martell. For example, he asked rhetorically, if he were to generate 30 paragraphs of text, how easy would it be to decide what’s a hallucination and what’s not? Obviously, it would take some time. “You also often want to use large language models in a context where you’re not an expert. That’s one of the real values of a large language model: … asking questions where you don’t have expertise,” Martell said. “My concern is that the thing that the model gets wrong [imposes] a high cognitive load [on a human trying] to determine whether it’s right or whether it’s wrong.” Future LLMs need ‘five nines’ of reliability What Martell wants to happen is more testing and the development of acceptability conditions for LLMs in different use cases. The acceptability conditions will come with metrics that can demonstrate how accurate a model is and how often it generates hallucinations. As the person responsible for AI at the DoD, Martell said that if a soldier in the field is asking an LLM a question about how to set up a new technology, there needs to be a high degree of accuracy. “I need five nines [99.999% accuracy] of correctness,” he said. “I cannot have a hallucination that says: ‘Oh yeah, put widget A connected to widget B’ — and it blows up.” 
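Martell's "five nines" threshold is easy to make concrete: 99.999% correctness still allows roughly one wrong answer per 100,000 queries. The arithmetic below, using a hypothetical volume of one million queries, shows how quickly errors accumulate at lower reliability levels.

```python
# Expected wrong answers at a given reliability level, for a
# hypothetical volume of one million queries.
queries = 1_000_000

for label, reliability in [("two nines", 0.99),
                           ("three nines", 0.999),
                           ("five nines", 0.99999)]:
    expected_errors = queries * (1 - reliability)
    print(f"{label} ({reliability:.3%}): ~{expected_errors:,.0f} errors")

# At five nines, a million queries still yield about 10 wrong answers;
# at two nines (99%), the same volume yields about 10,000.
```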
"
14,504
2,022
"Report: Data access hurdles affect AI adoption for 71% of enterprises | VentureBeat"
"https://venturebeat.com/data-infrastructure/report-data-access-hurdles-affect-ai-adoption-for-71-of-enterprises"
"Report: Data access hurdles affect AI adoption for 71% of enterprises Even as decision-makers and CXOs remain bullish on AI’s potential, enterprises are struggling to make the most of it at the ground level. Case in point: a new report from data integration giant Fivetran that says 71% of companies find it difficult to access all the data needed to run AI programs, workloads and models. Working with Vanson Bourne, the company surveyed 550 IT and data science professionals in multiple countries and found gaps in data movement and access across their organizations. The finding is significant as data is vital for model training and implementation. One cannot run a successful AI program without laying a solid foundation for data storage and movement, starting with a data warehouse or lake to automate data ingestion and pre-processing. 
“Analytic teams that utilize a modern data stack can more readily extend the value of their data and maximize their investments in AI and data science,” George Fraser, CEO of Fivetran, said in the study. Data access obstacles In the survey , almost all of the respondents confirmed that they collect and use data from operational systems on some level. However, 69% said they struggle to access the right information at the right time, while at least 73% claimed to face difficulty extracting, loading and transforming the data and translating it into practical advice and insights for decision-makers. As a result, even though a large number of organizations (87%) consider AI vital for business survival, they fail to make the most of it. Their broken, manual data processes lead to inaccurate models, eventually resulting in a lack of trust and a fallback to human judgment. The survey respondents claimed that inefficient data processes force them to rely on human-led decision-making 71% of the time. In fact, only 14% of them claimed to have achieved advanced AI maturity — using general-purpose AI to automatically make predictions and business decisions. On top of that, there’s a significant financial impact, with respondents estimating they are losing out on an average of 5% of global annual revenues due to models built using inaccurate or low-quality data. Talent gets wasted The challenges associated with data movement, processing and availability also mean that the talent hired to build AI models ends up wasting time on tasks outside of their main job. In the Fivetran survey, the respondents claimed that their data scientists devote 70% of their time on average to just preparing data. As many as 87% of respondents agreed that the data science talent within their organization is not being utilized to its full potential. 
According to Fortune Business Insights , the global AI market is projected to grow from $387.45 billion in 2022 to $1,394.30 billion by 2029, with a CAGR of 20.1%. "
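The growth projection cited above is internally consistent: a quick compound-annual-growth-rate check over the seven years from 2022 to 2029 reproduces the reported 20.1%.

```python
# Verify the reported CAGR from the start value, end value and period.
start, end, years = 387.45, 1394.30, 2029 - 2022

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # CAGR: 20.1%
```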
14,505
2,022
"Adobe commits to transparency in use of generative AI | VentureBeat"
"https://venturebeat.com/ai/adobe-commits-to-transparency-in-use-of-generative-ai"
"Adobe commits to transparency in use of generative AI Today, at Adobe MAX, billed as the world’s largest creativity conference, Adobe announced its commitment to support creatives by ensuring transparency in the use of generative AI tools. In a year dominated by the rise of generative AI tools – such as OpenAI’s DALL-E 2, Google’s Imagen, Stable Diffusion and MidJourney – Adobe, the world’s leading computer graphics software company, said its approach to developing creator-centric generative AI offerings would leverage its Content Authenticity Initiative (CAI) standards and invest in new research to support creatives’ control over their style and work. 
The CAI is an Adobe-led initiative that enables creators to securely attach provenance data to digital content, helping ensure creators get credit for their work and audiences understand who made a piece of content and how it was created. The news comes as artists say they have no control over AI image generators copying their style to make thousands of new images, while legal experts have weighed in on questions around ownership of images generated by AI tools. Adobe says it’s experimenting with generative AI “Adobe, like other innovators, has been experimenting with generative AI,” said Scott Belsky, chief product officer and executive vice president, Adobe Creative Cloud, in a blog post tied to the announcement. “It is a transformational technology, one that will accelerate the ways artists brainstorm and explore creative avenues.” That said, Belsky added generative AI raises valid concerns. “Among the questions, how is the work of creative people being used to train the AI models? And how will we know whether something we see was created by a human or a computer?” Belsky said Adobe, which is known for flagship products such as Photoshop and Illustrator as well as for its mobile app Adobe Express and its SaaS offering, Creative Cloud, is “early” in its journey to integrate generative AI into Adobe creative tools. “But let’s imagine, for instance, AI within Photoshop that generates rich, editable PSDs,” he said. The AI could generate a dozen different approaches, he explained, that a creative professional could choose from to explore further using Photoshop’s full selection of tools. Or, generative AI incorporated into Adobe Express could help less experienced creators. 
“Rather than having to find a premade template to start a project with, Express users could generate a template through a prompt and use generative AI to add an object to the scene,” he said. “But they still have full control.” Belsky said Adobe sees generative AI as a “hyper-competent creative assistant” that will “multiply what creators can achieve by presenting new images and alternative approaches, but will never replace what we value in art: human imagination, an idiosyncratic style, and a unique personal story.” New AI capabilities across Creative Cloud and Express At MAX, the company also unveiled new AI-driven capabilities across Creative Cloud apps and Adobe Express, focused on maximizing efficiency and creativity. Creative Cloud already incorporates a variety of AI-powered features powered by Adobe’s AI engine, Sensei, including Neural Filters in Photoshop, a feature it added in 2020. Most notably, the company added Select People, a new Adobe Lightroom tool that automatically detects a person within a photograph, then creates masks specific to their facial skin, body skin, eyebrow, iris/pupil, lips, teeth, mouth, and hair. And new AI capabilities in Adobe Express give creators access to functionality from Adobe’s flagship creative tools, including Photoshop and Illustrator. They can instantly resize videos and images for quick sharing on social media, find ideal color palettes, and canvas over 20,000 Adobe Fonts. “They are really about maximizing efficiency — so reducing those mundane, repetitive tasks, and helping creatives just focus on their creativity,” said Deepa Subramaniam, vice president of product marketing, professional creativity at Adobe. “We’re continuing to incorporate innovation with Sensei AI, and it’s really at the center of everything that we do.” 
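The provenance mechanism the CAI describes, attaching verifiable data about who made a piece of content, can be sketched in miniature: bind a cryptographic hash of the content bytes to creator metadata, so any later alteration of the bytes is detectable. This is a toy illustration only; the real CAI/C2PA format uses structured, cryptographically signed manifests, not the bare SHA-256 record shown here, and the names below are invented for the example.

```python
import hashlib
import json

def attach_provenance(content: bytes, creator: str, tool: str) -> dict:
    """Toy provenance record binding a content hash to creator metadata."""
    return {
        "creator": creator,
        "tool": tool,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(content: bytes, record: dict) -> bool:
    """True only if the content still matches the hash in the record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

image = b"\x89PNG...original pixel data"  # stand-in for real image bytes
record = attach_provenance(image, creator="Jane Artist", tool="Photoshop")
print(json.dumps(record, indent=2))

print(verify(image, record))            # True: untouched content
print(verify(image + b"edit", record))  # False: the bytes were altered
```

The key property is that the record travels with the content, and tampering with either the content or the record breaks the match.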
"
14,506
2,022
"AI goes multilingual with Hugging Face's BLOOM | VentureBeat"
"https://venturebeat.com/ai/ai-goes-multilingual-with-hugging-faces-bloom-large-language-model"
"AI goes multilingual with Hugging Face’s BLOOM With all the excitement and innovations surrounding artificial intelligence (AI) in recent years, one key thing has often been left behind – support for multiple languages, beyond just English. That’s now going to change, thanks in part to the launch of BLOOM (which is an acronym for BigScience Large Open-science Open-access Multilingual Language Model). BLOOM got its start in 2021, with development led by machine learning startup Hugging Face , which raised $100 million in May. The BigScience effort also benefits from a wide array of contributors including Nvidia’s Megatron and the Microsoft DeepSpeed teams, as well as receiving support from CNRS , the French National Centre for Scientific Research. The BLOOM model was built and trained using the Jean Zay supercomputer that is located in France. 
BLOOM has an architecture that is similar to OpenAI’s GPT-3 large language model, but with the key fundamental difference being that BLOOM is multilingual. “GPT-3 is monolingual and BLOOM was designed from the start to be multilingual so it was trained on several languages, and also to incorporate a significant amount of programming language data,” Teven Le Scao, research engineer at Hugging Face, told VentureBeat. “BLOOM supports 46 human languages and 13 programming languages — so that’s a very sizable difference.” How BLOOM was trained with open-source machine learning models The BLOOM effort involved multiple components including collecting a large dataset and then building a training model. Le Scao explained that Hugging Face made use of Nvidia’s Megatron and Microsoft’s DeepSpeed open-source projects, which are both efforts designed to enable data scientists to train large language models. Both Megatron and DeepSpeed are based on the open-source PyTorch machine learning framework. For BLOOM, the researchers developed a fork of the Megatron and DeepSpeed projects that enabled the model to look at all the different languages. In terms of BLOOM itself, the project was developed in the open and makes use of its own open license that is modeled on the Responsible AI license. “We’re trying to define what open source means in the context of large AI models, because they don’t really work like software does,” Le Scao said. He explained that the goal of the licensing for BLOOM was to make the model as open as possible, while still retaining a degree of control on the use cases that organizations have for the model. How large language models fit into natural language processing Large language models (LLM) are a subset of the overall field of natural language processing (NLP). 
Le Scao said that the language model is like an “atomic unit” for NLP, providing the building-block components on which complex AI interactions and applications can be built. For example, he noted that it doesn’t make sense for an NLP model to learn how to do summarization as well as speak a language at the same time. Le Scao said that a human doesn’t learn how to speak English and then write a full research report at the same time. Typically it makes sense for the human to learn how to speak the language first. Use cases for multilanguage models like BLOOM To date, most AI language models have used either English or Chinese. BLOOM will now extend the use cases, notably for French, Spanish and Arabic speakers, where there has not been an open LLM available before. In addition to providing a new foundation for multiple spoken human languages, BLOOM could enable a new era for code development as well. The use of AI for code development is a relatively nascent space, with GitHub’s Copilot, which became generally available at the end of June, being among the early leaders. Le Scao expects that due to the diversity of programming languages that BLOOM understands, it will help to enable new applications for developers. “BLOOM is going to be a strong platform for coding applications,” Le Scao said. Now that BLOOM is ready for usage, Le Scao also expects that new and unexpected use cases will emerge. “This is the fun part, because we’ve done all the hard work of getting BLOOM to run, and now everyone can run whatever crazy experiment they want from a powerful language model,” he said. 
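One reason a single model can span 46 human languages and 13 programming languages is byte-level tokenization: BLOOM's BPE tokenizer falls back to raw UTF-8 bytes, so no input in any language is ever out of vocabulary. The sketch below shows only that byte-level fallback idea; a real BPE tokenizer additionally merges frequent byte sequences into larger learned tokens, which this toy version omits.

```python
# Byte-level fallback: any string in any language reduces to UTF-8 bytes,
# so a base vocabulary of just 256 tokens already covers every language.
samples = {
    "English": "hello world",
    "French": "bonjour le monde",
    "Arabic": "مرحبا بالعالم",
    "Python": "def f(x): return x + 1",
}

for lang, text in samples.items():
    tokens = list(text.encode("utf-8"))  # token ids in range 0..255
    assert all(0 <= t < 256 for t in tokens)
    # Round trip: decoding the bytes recovers the original text exactly.
    assert bytes(tokens).decode("utf-8") == text
    print(f"{lang}: {len(tokens)} byte tokens")
```

Learned merges matter for efficiency (fewer tokens per sentence), but the byte fallback is what guarantees that the multilingual coverage is lossless.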
"
14,507
2,022
"DALL-E 2, the future of AI research, and OpenAI’s business model | VentureBeat"
"https://venturebeat.com/ai/dall-e-2-the-future-of-ai-research-and-openais-business-model"
"DALL-E 2, the future of AI research, and OpenAI’s business model Artificial intelligence research lab OpenAI made headlines again, this time with DALL-E 2, a machine learning model that can generate stunning images from text descriptions. DALL-E 2 builds on the success of its predecessor DALL-E and improves the quality and resolution of the output images thanks to advanced deep learning techniques. The announcement of DALL-E 2 was accompanied by a social media campaign by OpenAI’s engineers and its CEO, Sam Altman, who shared wonderful photos created by the generative machine learning model on Twitter. DALL-E 2 shows how far the AI research community has come toward harnessing the power of deep learning and addressing some of its limits. It also provides an outlook of how generative deep learning models might finally unlock new creative applications for everyone to use. 
At the same time, it reminds us of some of the obstacles that remain in AI research and disputes that need to be settled. The beauty of DALL-E 2 Like other milestone OpenAI announcements, DALL-E 2 comes with a detailed paper and an interactive blog post that shows how the machine learning model works. There’s also a video that provides an overview of what the technology is capable of doing and what its limitations are. DALL-E 2 is a “generative model,” a special branch of machine learning that creates complex output instead of performing prediction or classification tasks on input data. You provide DALL-E 2 with a text description, and it generates an image that fits the description. Generative models are a hot area of research that received much attention with the introduction of generative adversarial networks (GAN) in 2014. The field has seen tremendous improvements in recent years, and generative models have been used for a vast variety of tasks, including creating artificial faces, deepfakes , synthesized voices and more. However, what sets DALL-E 2 apart from other generative models is its capability to maintain semantic consistency in the images it creates. For example, the following images (from the DALL-E 2 blog post) are generated from the description “An astronaut riding a horse.” One of the descriptions ends with “as a pencil drawing” and the other “in photorealistic style.” The model remains consistent in drawing the astronaut sitting on the back of the horse and holding their hands in front. This kind of consistency shows itself in most examples OpenAI has shared. The following examples (also from OpenAI’s website) show another feature of DALL-E 2, which is to generate variations of an input image. 
Here, instead of providing DALL-E 2 with a text description, you provide it with an image, and it tries to generate other forms of the same image. Here, DALL-E maintains the relations between the elements in the image, including the girl, the laptop, the headphones, the cat, the city lights in the background, and the night sky with moon and clouds. Other examples suggest that DALL-E 2 seems to understand depth and dimensionality, a great challenge for algorithms that process 2D images. Even if the examples on OpenAI’s website were cherry-picked, they are impressive. And the examples shared on Twitter show that DALL-E 2 seems to have found a way to represent and reproduce the relationships between the elements that appear in an image, even when it is “dreaming up” something for the first time. In fact, to prove how good DALL-E 2 is, Altman took to Twitter and asked users to suggest prompts to feed to the generative model. The results (see the thread below) are fascinating. The science behind DALL-E 2 DALL-E 2 takes advantage of CLIP and diffusion models, two advanced deep learning techniques created in the past few years. But at its heart, it shares the same concept as all other deep neural networks: representation learning. Consider an image classification model. The neural network transforms pixel colors into a set of numbers that represent its features. This vector is sometimes also called the “embedding” of the input. Those features are then mapped to the output layer, which contains a probability score for each class of image that the model is supposed to detect. During training, the neural network tries to learn the best feature representations that discriminate between the classes. Ideally, the machine learning model should be able to learn latent features that remain consistent across different lighting conditions, angles and background environments. But as has often been seen, deep learning models often learn the wrong representations. 
For example, a neural network might think that green pixels are a feature of the “sheep” class because all the images of sheep it has seen during training contain a lot of grass. Another model that has been trained on pictures of bats taken during the night might consider darkness a feature of all bat pictures and misclassify pictures of bats taken during the day. Other models might become sensitive to objects being centered in the image and placed in front of a certain type of background. Learning the wrong representations is partly why neural networks are brittle, sensitive to changes in the environment and poor at generalizing beyond their training data. It is also why neural networks trained for one application need to be fine-tuned for other applications — the features of the final layers of the neural network are usually very task-specific and can’t generalize to other applications. In theory, you could create a huge training dataset that contains all kinds of variations of data that the neural network should be able to handle. But creating and labeling such a dataset would require immense human effort and is practically impossible. This is the problem that Contrastive Language-Image Pre-training (CLIP) solves. CLIP trains two neural networks in parallel on images and their captions. One of the networks learns the visual representations in the image and the other learns the representations of the corresponding text. During training, the two networks try to adjust their parameters so that similar images and descriptions produce similar embeddings. One of the main benefits of CLIP is that it does not need its training data to be labeled for a specific application. It can be trained on the huge number of images and loose descriptions that can be found on the web. Additionally, without the rigid boundaries of classic categories, CLIP can learn more flexible representations and generalize to a wide variety of tasks. 
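The contrastive setup described above, matching image embeddings to caption embeddings, can be illustrated with toy vectors: after training, cosine similarity should be high for matching image-caption pairs and low for mismatched ones. The 3-d embeddings below are invented for illustration; a real CLIP model learns roughly 512-dimensional vectors from hundreds of millions of pairs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical embeddings, as if produced by trained image/text encoders.
image_emb = {
    "photo of a dog": [0.9, 0.1, 0.0],
    "photo of a car": [0.0, 0.2, 0.9],
}
text_emb = {
    "a dog in the park": [0.8, 0.2, 0.1],
    "a car on the road": [0.1, 0.1, 0.95],
}

# In a well-trained shared space, each caption's nearest image matches it.
for caption, t in text_emb.items():
    best = max(image_emb, key=lambda img: cosine(image_emb[img], t))
    print(f"{caption!r} -> {best!r}")
```

Training amounts to nudging both encoders so this nearest-neighbor structure holds across the whole dataset, which is what lets CLIP later rank arbitrary captions against arbitrary images without task-specific labels.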
For example, if an image is described as “a boy hugging a puppy” and another described as “a boy riding a pony,” the model will be able to learn a more robust representation of what a “boy” is and how it relates to other elements in images. CLIP has already proven to be very useful for zero-shot and few-shot learning , where a machine learning model is asked on the fly to perform tasks that it hasn’t been trained for. The other machine learning technique used in DALL-E 2 is “diffusion,” a kind of generative model that learns to create images by gradually noising and denoising its training examples. Diffusion models are like autoencoders , which transform input data into an embedding representation and then reproduce the original data from the embedding information. DALL-E trains a CLIP model on images and captions. It then uses the CLIP model to train the diffusion model. Basically, the diffusion model uses the CLIP model to generate the embeddings for the text prompt and its corresponding image. It then tries to generate the image that corresponds to the text. Disputes over deep learning and AI research For the moment, DALL-E 2 will only be made available to a limited number of users who have signed up for the waitlist. Since the release of GPT-2 , OpenAI has been reluctant to release its AI models to the public. GPT-3, its most advanced language model, is only available through an API interface. There’s no access to the actual code and parameters of the model. OpenAI’s policy of not releasing its models to the public has not sat well with the AI community and has attracted criticism from some renowned figures in the field. DALL-E 2 has also resurfaced some of the longtime disagreements over the preferred approach toward artificial general intelligence. OpenAI’s latest innovation has certainly proven that with the right architecture and inductive biases, you can still squeeze more out of neural networks. 
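The noising-and-denoising idea behind diffusion models can be shown on a toy 1-D "image". The forward process below shrinks the signal and mixes in Gaussian noise; the reverse step here cheats by reusing the known noise, whereas a trained diffusion model must learn to predict that noise from the noisy input. All constants are illustrative.

```python
import math
import random

random.seed(0)

x0 = [0.2, 0.9, 0.4, 0.7]  # a tiny 4-pixel "image"

def noise_step(x, alpha):
    """One forward diffusion step: scale the signal, add Gaussian noise."""
    eps = [random.gauss(0.0, 1.0) for _ in x]
    x_noisy = [math.sqrt(alpha) * v + math.sqrt(1 - alpha) * e
               for v, e in zip(x, eps)]
    return x_noisy, eps

def denoise_step(x_noisy, eps, alpha):
    """Invert the step exactly, since here the added noise is known.
    A real diffusion model trains a network to *predict* eps instead."""
    return [(v - math.sqrt(1 - alpha) * e) / math.sqrt(alpha)
            for v, e in zip(x_noisy, eps)]

alpha = 0.9
x1, eps = noise_step(x0, alpha)
recovered = denoise_step(x1, eps, alpha)

print([round(v, 6) for v in recovered])  # matches x0 up to float error
```

Repeating the forward step many times turns any image into pure noise; generation runs the learned reverse process from noise back to an image, guided in DALL-E 2's case by the CLIP embedding of the prompt.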
Proponents of pure deep learning approaches jumped on the opportunity to slight their critics, among them cognitive scientist Gary Marcus, author of a recent essay entitled “ Deep Learning Is Hitting a Wall. ” Marcus endorses a hybrid approach that combines neural networks with symbolic systems. Based on the examples that have been shared by the OpenAI team, DALL-E 2 seems to manifest some of the common-sense capabilities that have so long been missing in deep learning systems. But it remains to be seen how deep this common sense and semantic stability go, and how DALL-E 2 and its successors will deal with more complex concepts such as compositionality. The DALL-E 2 paper mentions some of the limitations of the model in generating text and complex scenes. Responding to the many tweets directed his way, Marcus pointed out that the DALL-E 2 paper in fact proves some of the points he has been making in his papers and essays. Some scientists have pointed out that despite the fascinating results of DALL-E 2, some of the key challenges of artificial intelligence remain unsolved. Melanie Mitchell, professor of complexity at the Santa Fe Institute, raised some important questions in a Twitter thread. Mitchell referred to Bongard problems , a set of challenges that test the understanding of concepts such as sameness, adjacency, numerosity, concavity/convexity and closedness/openness. “We humans can solve these visual puzzles due to our core knowledge of basic concepts and our abilities of flexible abstraction and analogy,” Mitchell tweeted. “If such an AI system were created, I would be convinced that the field is making real progress on human-level intelligence. 
Until then, I will admire the impressive products of machine learning and big data, but will not mistake them for progress toward general intelligence.”

The business case for DALL-E 2

Since switching from non-profit to a “capped profit” structure, OpenAI has been trying to find the balance between scientific research and product development. The company’s strategic partnership with Microsoft has given it solid channels to monetize some of its technologies, including GPT-3 and Codex. In a blog post, Altman suggested a possible DALL-E 2 product launch in the summer. Many analysts are already suggesting applications for DALL-E 2, such as creating graphics for articles (I could certainly use some for mine) and doing basic edits on images. DALL-E 2 will enable more people to express their creativity without the need for special skills with design tools. Altman suggests that advances in AI are taking us toward “a world in which good ideas are the limit for what we can do, not specific skills.” In any case, the more interesting applications of DALL-E will surface as more and more users tinker with it. For example, the idea for Copilot and Codex emerged as users started using GPT-3 to generate source code for software. If OpenAI releases a paid API service a la GPT-3, then more and more people will be able to build apps with DALL-E 2 or integrate the technology into existing applications. But as was the case with GPT-3, building a business model around a potential DALL-E 2 product will have its own unique challenges. A lot of it will depend on the costs of training and running DALL-E 2, the details of which have not been published yet. And as the exclusive license holder to GPT-3’s technology, Microsoft will be the main winner of any innovation built on top of DALL-E 2 because it will be able to do it faster and cheaper.
Like GPT-3, DALL-E 2 is a reminder that as the AI community continues to gravitate toward creating larger neural networks trained on ever-larger training datasets, power will continue to be consolidated in a few very wealthy companies that have the financial and technical resources needed for AI research. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business and politics. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2022
"Google announces AI advances in text-to-video, language translation, more | VentureBeat"
"https://venturebeat.com/ai/google-announces-ai-advances-in-text-to-video-language-translation-more"
"Google announces AI advances in text-to-video, language translation, more

[Image: Sundar Pichai speaks to crowd via video at Google AI event]

At a Google AI event this morning at the company’s Pier 57 offices in New York City, Google announced a variety of artificial intelligence (AI) advances, including in generative AI, language translation, health AI and disaster management.
The event also focused heavily on a discussion around its efforts to build responsible AI, particularly related to control and safety, helping identify generative AI, and “building for everyone.” “We see so much opportunity ahead and are committed to making sure the technology is built in service of helping people, like any transformational technology,” Google CEO Sundar Pichai said in a video shared with attendees; the event, he said, was meant to “reimagine how technology can be helpful in people’s lives.” In addition, Pichai pointed out the risks and challenges that come with AI. “That’s why Google is focused on responsible AI from the beginning, publishing AI principles which prioritize the safety and privacy of people over anything else,” he said.

Google debuts Imagen Video — Phenaki combo

Douglas Eck, principal scientist at Google Research and research director for Google’s Brain Team, shared a variety of Google generative AI announcements, including its cautious, slow efforts (compared to DALL-E 2 or Stability AI) to release its text-to-image AI systems. While Google Imagen is not yet available to the public, the company announced it will add a limited form of it to its AI Test Kitchen app (which this year showed off LaMDA) as a way to collect early feedback. The company showed off a demo called City Dreamer, in which users can generate images of a city designed around a theme, such as, say, pumpkins. In addition, building on its text-to-video work announced last month, Google shared the first rendering of a video that combines both of the company’s complementary text-to-video research approaches — Imagen Video and Phenaki. The result pairs Phenaki’s ability to generate video from a sequence of text prompts with Imagen Video’s high-resolution detail.
“I think it is amazing that we can talk about telling long-form stories like this with super-resolution video, not just from one prompt but a sequence of prompts, with a new way of storytelling,” said Eck, adding that he was excited about how filmmakers or video storytellers might make use of this technology.

Other generative AI advances

In the text space, Eck also discussed the LaMDA dialogue engine and the Wordcraft Writers Workshop, which challenged professional authors to write experimental fiction using LaMDA as a tool. Google will soon release a research paper on this, Eck said. “One clear finding is that using LaMDA to write full stories is a dead end,” he said. “It’s more useful to use LaMDA to add spice.” The user interface also has to be right, he added, serving as a “text editor with a purpose.” Eck also highlighted Google’s efforts to use AI to generate code, as well as the recently introduced AudioLM research — which, with no need for a musical score, extends any audio clip entered — and DreamFusion, the recently announced text-to-3D rendering approach that combines Imagen with NeRF’s 3D capabilities. “I’ve never seen quite so many advances in the generative space; the pace is really incredible,” he said.

Google is building a universal speech translator

After reviewing a variety of Google advances in language AI research, Google Brain leader Zoubin Ghahramani announced the company’s effort to reflect the diversity of the world’s languages, including an ambitious stab at building a model that supports the world’s top 1,000 languages. In addition, Google says it is building a universal speech model trained on over 400 languages, which it claims is the “largest language model coverage seen in a speech model today.” All of these efforts “will be a multi year journey,” he said.
“But this project will set a critical foundation for making language-based AI truly helpful for everyone.”

A strong focus on responsible AI

Following the AI announcements, Marian Croak, VP of engineering at Google, and James Manyika, SVP at Google-Alphabet, discussed Google’s focus on responsible AI. “I think if we’re going to be leaders, it’s extremely important that we push the state of the art on responsible AI technology,” said Croak. “I’m passionate about wanting to discover ways to make things work in practice.” Google does adversarial testing “constantly and continuously,” she said. “Then we also make sure that we’re setting benchmarks that are quantitative and can be measured and verified across all the dimensions of our AI. So, we also do that on a continuous basis.” "
2023
"Denial of service vulnerability discovered in libraries used by GitHub and others | VentureBeat"
"https://venturebeat.com/security/denial-of-service-vulnerability-discovered-in-libraries-used-by-github-and-others"
"Denial of service vulnerability discovered in libraries used by GitHub and others

Unlike breaches targeting sensitive data or ransomware attacks, denial of service (DoS) exploits aim to take down services and make them wholly inaccessible. Several such attacks have occurred in recent memory; last June, for instance, Google blocked what at that point was the largest distributed denial of service (DDoS) attack in history. Akamai then broke that record in September when it detected and mitigated an assault in Europe. In a recent development, Legit Security today announced its discovery of an easy-to-exploit DoS vulnerability in markdown libraries used by GitHub, GitLab and other applications, via a popular markdown rendering package called commonmarker. “Imagine taking down GitHub for some time,” said Liav Caspi, cofounder and CTO of the software supply chain security platform.
“This could be a major global disruption and shut down most software development shops. The impact would likely be unprecedented.” GitHub, which did not respond to requests for comment by VentureBeat, has posted a formal acknowledgement and fix.

Denial of service aim: disruption

Both DoS and DDoS overload a server or web app with the aim of interrupting services. As Fortinet describes it, DoS does this by flooding a server with traffic and making a website or resource unavailable; DDoS uses multiple computers or machines to flood a targeted resource. And there’s no question that they are on the rise — steeply, in fact. Cisco noted a 776% year-over-year growth in attacks of 100 to 400 gigabits per second between 2018 and 2019. The company estimates that the total number of DDoS attacks will double from 7.9 million in 2018 to 15.4 million this year. But although DDoS attacks aren’t always intended to score sensitive data or hefty ransom payouts, they nonetheless are costly. Per Gartner research, the average cost of IT downtime is $5,600 per minute. Depending on organization size, the cost of downtime can range from $140,000 to as much as $5 million per hour. And, with so many apps incorporating open-source code — a whopping 97% by one estimate — organizations don’t have full visibility into their security posture and potential gaps and vulnerabilities. Indeed, open-source libraries are “ubiquitous” in modern software development, said Caspi — so when vulnerabilities emerge, they can be very difficult to track due to uncontrolled copies of the original vulnerable code. When a library becomes popular and widespread, a vulnerability could potentially enable an attack on countless projects.
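The Gartner per-minute figure above is easy to sanity-check against the quoted per-hour range:

```python
# Back-of-the-envelope check on the downtime figures cited above.
cost_per_minute = 5_600               # Gartner: average cost of IT downtime
cost_per_hour = cost_per_minute * 60  # implied hourly average

# The per-hour range the article cites, which brackets that average.
low, high = 140_000, 5_000_000
average_is_within_range = low <= cost_per_hour <= high
```

The implied $336,000 per hour sits comfortably inside the $140,000 to $5 million range.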
“Those attacks can include disruption of critical business services,” said Caspi, “such as crippling the software supply chain and the ability to release new business applications.”

Vulnerability uncovered

As Caspi explained, markdown refers to creating formatted text using a plain text editor, and it is commonly found in software development tools and environments. A wide range of applications and projects implement these popular open-source markdown libraries, such as the popular variant found in GitHub’s implementation, called GitHub Flavored Markdown (GFM). A copy of the vulnerable GFM implementation was found in commonmarker, the popular Ruby package implementing markdown support. (It has more than 1 million dependent repositories.) Dubbed “MarkDownTime,” the vulnerability allows an attacker to deploy a simple DoS attack that would shut down digital business services by disrupting application development pipelines, said Caspi. Legit Security researchers found that it was simple to trigger unbounded resource exhaustion leading to a DoS attack. Any product that can read and display markdown (*.md files) and uses a vulnerable library can be targeted, he explained. “In some cases, an attacker can continuously utilize this vulnerability to keep the service down until it is entirely blocked,” said Caspi. He explained that Legit Security’s research team was looking into vulnerabilities in GitHub and GitLab as part of its ongoing software supply chain security research. The team disclosed the security issue to the commonmarker maintainer, as well as to both GitHub and GitLab. “All of them have fixed the issues, but many more copies of this markdown implementation have been deployed and are in use,” said Caspi.
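The class of bug described here, unbounded resource exhaustion while rendering nested markdown, can be illustrated with a toy parser. This is a hypothetical sketch, not commonmarker's actual code; the depth cap shows the kind of guard a fixed renderer can apply:

```python
def parse_emphasis(text, max_depth=100):
    """Toy 'markdown' parser: each leading '*' opens one nested emphasis node.

    Without max_depth, an attacker-supplied payload such as '*' * 1_000_000
    forces unbounded work per request — the MarkDownTime class of DoS.
    """
    depth = 0
    while text.startswith("*"):
        depth += 1
        if depth > max_depth:
            raise ValueError("nesting too deep; rejecting pathological input")
        text = text[1:]
    return depth
```

A vulnerable renderer is one that keeps allocating nodes (or recursing) for every level of nesting; the mitigation is simply to bound the work per input.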
As such, “precaution and mitigation measures should be employed.”

Strong controls, visibility

To protect themselves against this vulnerability, organizations should upgrade to a safer version of the markdown library and upgrade any vulnerable product like GitLab to the newest version, Caspi advised. And, generally speaking, when it comes to guarding against software supply chain attacks, organizations should have better security controls over the third-party software libraries they use. Protection also involves continuously checking for known vulnerabilities, then upgrading to safer versions. Also, the reputation and popularity of open-source software should be considered — in particular, avoid unmaintained or low-reputation software. And always keep SDLC systems like GitLab up to date and securely configured, said Caspi. "
2022
"Open-source code is everywhere; GitHub expands security tools to help secure it | VentureBeat"
"https://venturebeat.com/security/open-source-code-is-everywhere-github-expands-security-tools-to-help-secure-it"
"Open-source code is everywhere; GitHub expands security tools to help secure it

Whether directly or indirectly, nearly all organizations depend on software created by the open-source community. In fact, an incredible 97% of applications incorporate open-source code, and 90% of organizations say they are using it in some way. Still, as evidenced by Log4j and the SUNBURST/SolarWinds attack (and many others), open source can be rife with security vulnerabilities. According to Gartner, 89% of companies experienced a supplier risk event in the past five years, and Argon Security reports that software supply chain attacks grew by more than 300% between 2020 and 2021. The work of the open-source community “is used in almost every software product, so securing it and protecting the community has a big impact,” said Mariam Sulakian, senior product manager at GitHub.
“Vulnerabilities in open-source code can have a global ripple effect across the millions of people and services that rely on it.” The leading hosting service offers several capabilities to help address this problem, and today announced expansions to two of them: GitHub’s secret scanning alerts are now available for free on all public repositories, and its push protection feature is now offered for custom secret patterns. Both capabilities are now in public beta. “As the largest open-source community in the world, GitHub is always working to make using and contributing to open source easier,” said Sulakian. “We give away our most advanced security tools for free on public repositories to help keep open source secure, and to keep those building it safe.”

Keeping secrets safe

Exposed secrets and credentials are the most common cause of data breaches, as they often go untracked, and they can take an average of 327 days to identify. “Malicious actors often target leaked secrets and credentials as starting points for larger attacks, like ransomware and phishing campaigns,” said Sulakian. GitHub partners with more than 100 service providers to quickly remediate many exposed secrets through its secret scanning partner program. In 2022, the hosting service detected and alerted on more than 1.7 million exposed secrets across public repositories. Breaking that down to daily numbers, GitHub finds more than 4,500 potential secrets leaked in public repositories. Now, GitHub will empower open-source developers with these alerts too, for free. Once enabled, GitHub directly notifies developers of leaked secrets in code. This enables them to easily track alerts, identify the leak’s source, and take action. For example, a user can receive an alert and track remediation for a leaked self-hosted HashiCorp Vault key, said Sulakian.
“Secret scanning for public repositories will help millions of developers avoid exposing their credentials and passwords by accident,” she said. The gradual public beta rollout of secret scanning for public repositories began today and the feature should be available to all users by the end of January 2023. “With secret scanning, we found a ton of important things to address,” said David Ross, staff security engineer with Postmates. “On the appsec side, it’s often the best way for us to get visibility into issues in the code.”

GitHub is pushing security forward

Similarly, businesses often have their own unique set of secrets that they want to detect when exposed — and protect before exposure, Sulakian explained. With custom patterns, organizations scan for passwords in connection strings, private keys, and URLs that have embedded credentials (among other instances) across thousands of their repositories. “But remediation takes time and significant resources,” said Sulakian. To address this problem, GitHub introduced push protection to GitHub Advanced Security (GHAS) customers in April 2022. This capability seeks to proactively prevent leaks by scanning for secrets before they are committed. In the eight months since that release, GitHub has prevented more than 8,000 secret leaks across 100 secret types, said Sulakian. With the enhanced capabilities announced today, organizations with GHAS have additional coverage for what are often their most important secret patterns: those customized and defined internally to their organizations. “With push protection, businesses can prevent accidental leaks of the most critical secrets,” said Sulakian.

Immediate intel before pushing a secret out

Push protection for custom patterns can be configured on a pattern-by-pattern basis at the organization or repository level, Sulakian explained. With the capability enabled, GitHub will enforce blocks when contributors try to push code that contains matches to the defined pattern.
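As a rough illustration of what a custom pattern looks like, the regular expression below flags URLs with embedded credentials, one of the examples mentioned above. The pattern and the scanning helper are hypothetical, not GitHub's actual implementation, though GitHub's custom patterns are likewise expressed as regular expressions:

```python
import re

# Hypothetical custom pattern: any URL carrying user:password@ credentials.
URL_WITH_CREDS = re.compile(r"[a-z][a-z0-9+.-]*://[^/\s:@]+:[^/\s:@]+@[^\s'\"]+")

def scan(text):
    """Return every credential-bearing URL found in the given text (e.g. a diff)."""
    return [m.group(0) for m in URL_WITH_CREDS.finditer(text)]

leaks = scan("DATABASE_URL = 'postgres://admin:hunter2@db.internal:5432/app'")
# A push-protection hook would block the push whenever `leaks` is non-empty.
```

The same idea scales up: the service evaluates each configured pattern against incoming pushes and rejects any commit whose diff matches.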
Organizations can decide which patterns to push-protect based on false positives. Integrating this capability into a developer’s flow saves time and helps educate on best practices, said David Florey, software engineering director at Intel. “If I attempt to push a secret, I immediately know it,” he said. The GitHub tool stops him before a secret is pushed into the codebase, he said; whereas, if he relied solely on external scanning tools to scan the repository after the secret’s already been exposed, “I’ll need to quickly revoke the secret and refactor my code.”

Earlier detection, remediation

With threat actors increasingly targeting leaked secrets and credentials, GitHub customers are investing more resources to secure their increasingly complex software supply chain, said Sulakian. “Organizations constantly seek to detect and fix vulnerabilities earlier in the software lifecycle to improve overall security, save costs related to reactive work by appsec teams, and minimize damage,” said Sulakian. GitHub helps application security teams rapidly identify and remediate the vulnerabilities in users’ code, she said. The company has developed its tools, many of them free, to integrate directly into developer workflows to enable more secure, faster coding. Recently, it also introduced private vulnerability reporting to help organizations easily disclose vulnerabilities and communicate with maintainers. “Our philosophy is to make all our advanced security features available for free on public repositories,” said Sulakian. Ultimately, she maintained, “as the home for open source and 94-plus million developers, GitHub can advance the state of software security more than any other team or platform.”
"
2023
"Achieving reliable generative AI  | VentureBeat"
"https://venturebeat.com/ai/achieving-reliable-generative-ai"
"Guest post: Achieving reliable generative AI

The term “generative AI” has been all the buzz recently. Generative AI comes in several flavors, but common to all of them is the idea that the computer can automatically generate a lot of clever, useful content based on relatively little input from the user. If not something for nothing, at least a lot for a little. The initial recent excitement has been fueled by visual generative AI systems, such as DALL·E 2 and Stable Diffusion, in which the machine generates novel images based on brief textual descriptions. Want an image of “a donkey on the moon reading Tolstoy?” Voila! In a few seconds, you get a never-before-seen image of this well-read, well-traveled donkey. And then there’s the compelling value exchange – you input a few words and, in return, get a picture that’s worth a thousand.
But this is misleading, since it reinforces the image of the computer doing all the work. If indeed all you want is any aesthetic image of a lunar, erudite donkey, chances are you’ll be satisfied with the output of the system; there are many such images, and the systems are good enough to be able to produce one of them. But as an artist, you have a more nuanced intent in mind; at best, you’d use the generative system as an interactive tool, generating images from many prompts you experiment with, and you are also likely to massage the image yourself afterward. This is even more striking in the case of textual generative AI, and here, of course, ChatGPT has been all the rage. Here too, the promise is that the user jots down some key ideas, and the system takes over and does most of the writing. And indeed, systems such as ChatGPT are impressive. They write poems, blog posts, emails, marketing copy, and the list goes on. The systems sometimes produce long-form text that’s surprisingly coherent, on message, and includes many correct and relevant facts not mentioned in the instructions. Except when they don’t. And often enough, they won’t. In practice, textual generative AI, when deployed without proper controls, generates as much wrong content as it does useful content. And “wrong” doesn’t mean “slightly off.” It means downright nonsensical. The internet is replete with such examples of ChatGPT behavior; it will explain why 1000 is greater than 1062, will say it doesn’t know whether Lincoln and his assassin were on the same continent at the time of the assassination, will explain at length that the University of Alabama prohibited admitting Black students in 1973 while Emory University never discriminated (both wrong), and will claim that GPUs, CPUs, DNA computing and the abacus are increasingly powerful tools for deep learning.
All in fluent, convincing prose. This is not a shortcoming specific to ChatGPT; it is endemic to all current textual generative systems. Only a month ago, Meta unveiled Galactica, which claimed the ability to generate insightful scientific content and was taken down after two days, when it became apparent that it was producing as much pseudo-science as it did credible scientific content. The brittleness of textual generative AI was recognized early on. When GPT-2 was introduced in 2019, columnist Tiernan Ray wrote, “[GPT-2 displays] flashes of brilliance mixed with […] gibberish.” And when GPT-3 was released a year later, my colleague Andrew Ng wrote, “Sometimes GPT-3 writes like a passable essayist, [but] it seems a lot like some public figures who pontificate confidently on topics they know little about.” This brittleness of current generative AI limits its impact in the real world. As a well-known publisher recently complained to me, the time his company saved by using a certain generative system was offset by the time it needed to spend fixing the nonsense the system produced. To fully realize its potential, generative AI, especially the textual kind, must become more reliable. There are several technological developments that hold promise in this regard. One of them is increasing the degree to which the output is firmly anchored in trusted sources. By “firmly anchored,” I don’t mean merely being trained on trusted sources (which is already an issue in current systems), but in addition, that important parts of the output can be reliably traced back to the sources on which they were based. Current so-called “retrieval-augmented language models,” which access trusted sources to help guide the output of the neural network, point in a promising direction. Another key element is increasing the degree to which the systems exhibit basic common sense and reasoning and avoid egregious mistakes.
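A minimal sketch of the retrieval-augmented idea: look up trusted passages first, then constrain the model's prompt to them. Everything here (the document store, the keyword lookup) is invented for illustration; production systems use dense vector search over large corpora and a real language model call:

```python
# Tiny "trusted source" store standing in for a document index.
TRUSTED_DOCS = {
    "lincoln": "Abraham Lincoln was assassinated by John Wilkes Booth in 1865.",
    "gpu": "GPUs accelerate the matrix arithmetic at the heart of deep learning.",
}

def retrieve(query):
    """Toy keyword retrieval; a real system would use embedding similarity."""
    q = query.lower()
    return [text for key, text in TRUSTED_DOCS.items() if key in q]

def build_prompt(question):
    """Anchor the generator: pass only retrieved, trusted passages as context."""
    context = "\n".join(f"- {src}" for src in retrieve(question))
    return f"Answer using ONLY the sources below.\n{context}\n\nQ: {question}\nA:"

prompt = build_prompt("Who assassinated Lincoln?")
```

Because the context comes from the store, each claim in the answer can, in principle, be traced back to a named source — the "firm anchoring" described above.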
Long-form text tells a story, and the story must have internal logic, be factually correct, and have a point. Current systems don’t have these properties, at least not reliably. The statistical nature of the neural networks, which power the current systems, makes the systems capable of producing cogent passages some of the time, but they inevitably fall off the cliff when pushed beyond a certain limit. They make blatant factual or logical errors and can easily veer off-topic. There are several strands of work aimed at mitigating this. They include purely neural approaches, such as so-called “prompt decomposition” and “hierarchical generation.” Other approaches follow the so-called “neuro-symbolic” direction, which augments the neural machinery with explicit symbolic reasoning. But I think the most important development is achieving what I call product-algo fit. The temptation to “get something for nothing” seduces people into not providing enough guidance to the generative systems and demanding an output that is too ambitious. Generative AI will never be perfect, and a good product manager understands the limitations of the underlying technology; she designs the product to compensate for those and in particular, crafts the best division of labor between the user and the machine. Galactica, as mentioned earlier, is actually an interesting engineering artifact. But asking it to reliably produce scientific papers is just too much. Generative AI needs more guidance — if you don’t know where you’re going, you’ll get there. If you don’t strongly care where you’re going – when any donkey on the moon or any generic birthday greeting to grandma will do – you’re on relatively safe ground. But if you’re writing a letter to your boss, your prized client, or your loved one, you want to get it just right, and for this, the system needs more guidance. The guidance can be given upfront, such as by an enriched set of prompts, but also interactively in the product itself. 
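The "prompt decomposition" idea mentioned above can be caricatured in a few lines: rather than asking for a finished artifact in one shot, the product breaks the job into smaller prompts whose outputs are easier to check. The stage names below are invented for illustration:

```python
def decompose(task):
    """Split one ambitious writing request into smaller, checkable sub-prompts."""
    stages = (
        "an outline",
        "a first draft following the outline",
        "a revision fixing factual errors",
    )
    return [f"Produce {stage} for: {task}" for stage in stages]

sub_prompts = decompose("a letter to a prized client")
# Each sub-prompt would be sent to the model in turn, with the previous
# output (after human or automated review) fed into the next stage.
```

The division of labor this creates — the user reviews each intermediate output — is one concrete form of the "product-algo fit" argued for here.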
The jury is out on which combination of techniques will prove most useful, but I believe that the shortcomings of generative AI will be dramatically reduced. I also believe that this will happen sooner rather than later because of the enormous economic benefits of reliable generative AI. Does that mean the end of human writing? I don’t believe so. Certainly, some aspects of writing will be automated. Already today, we can’t live without spell-checking and grammar correction software; copy editing has been automated. But we still write, and I don’t think that will change. What will change is that, as we write, we’ll have built-in research assistants and editors (in the sense of a book editor, not the software artifact). These functions, which have been a luxury afforded by the very few, will be democratized. And that’s a good thing. Yoav Shoham is the co-founder and co-CEO of AI21 Labs. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,512
2,022
"How Microsoft could improve Copilot and ease open-source controversy | VentureBeat"
"https://venturebeat.com/programming-development/how-microsoft-could-improve-copilot-and-ease-open-source-controversy"
"Guest How Microsoft could improve Copilot and ease open-source controversy GitHub Copilot has been the subject of some controversy since Microsoft announced it in the summer of 2021. Most recently, Microsoft has been sued by programmer and lawyer Matthew Butterick, who has alleged that GitHub’s Copilot violates the terms of open-source licenses and infringes the rights of programmers. Despite the lawsuit, my sense is that Copilot is likely here to stay in some form or another, but it got me thinking: if developers are going to use an AI-assisted code generation tool, it would be more productive to think about how to improve it rather than fighting over its right to exist. Behind the Copilot controversy Copilot is a predictive code generator that relies on OpenAI Codex to suggest code — and entire functions — as coders compose their own code.
It is much like the predictive text seen in Google Docs or Google Search functions. As you begin to compose a line of original code, Copilot suggests code to complete the line or fragment based on a stored repository of similar code and functions. You can choose to accept the suggestion or override it with your own, potentially saving time and effort. The controversy comes from Copilot deriving its suggestions from a vast training set of open-source code that it has processed. The idea of monetizing the work of open-source software contributors without attribution has irked many in the GitHub community. It has even resulted in a call for the open-source community to abandon GitHub. There are valid arguments for both sides of this controversy. The developers who freely shared their original ideas likely did not intend for their work to end up packaged and monetized. On the other hand, it could be argued that what Microsoft has monetized is not the code but the AI technology for applying that code in a suitable context. Anyone with a free GitHub account can access the code, copy it and use it in their own projects — without attribution. In this regard, Microsoft isn’t using the code any differently from how it has been used all along. Taking Copilot to the next level As someone who has used Copilot and observed how it saves time and increases productivity, I see an opportunity for Microsoft to improve Copilot and address some of the complaints coming from its detractors. What would enhance the next generation of Copilot is a greater sense of context for its suggestions. To make usable recommendations, Copilot could base them on more than a simple GitHub search. The suggestions could work in the specific context of the code being written. There must be some significant AI technology at work behind the suggestions.
This is both the unique value of Copilot and the key to improving it. Software programmers want to know where the suggestions come from before accepting them, and to understand that the code is a fit for their specific purposes. The last thing we want is to use suggested code that works enough to run when compiled, but is inefficient, or worse, prone to failure or security risks. By providing more context to its Copilot suggestions, Microsoft could give the coder the confidence to accept them. It would be great to see Microsoft offer a peek into the origin of the suggested code. A trail back to the original source — including some attribution — would achieve this, and also share some of the credit that is due. Just knowing there is a window into the original open-source repository could bring some calm to the open-source community, and would also help Copilot users make better coding decisions as they work. I was pleased to see Microsoft reaching out to the community recently to understand how to build trust in AI-assisted tooling, and I am looking forward to seeing the results of that effort. As I said, it is hard to imagine that GitHub Copilot is going to go away merely because a portion of its community is upset with Microsoft’s repackaging of their work behind a paywall. But Microsoft would have everything to gain by extending a digital olive branch to the open-source community — while at the same time improving its product’s effectiveness. Coty Rosenblath is CTO at Katalon.
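The attribution trail proposed here could be as simple as metadata attached to each suggestion before it is shown to the developer. A hypothetical sketch; the `Suggestion` structure, the example repository, and the formatting are assumptions for illustration, not Copilot's actual design:

```python
# Sketch of attaching provenance to an AI code suggestion: each suggestion
# carries the repository and license it was derived from, so the developer
# can judge fit (and credit the source) before accepting it.

from dataclasses import dataclass

@dataclass
class Suggestion:
    code: str          # the suggested completion itself
    source_repo: str   # where the closest matching training example lives
    license: str       # license of that source, surfaced to the user

def format_suggestion(s: Suggestion) -> str:
    """Render the suggestion with a provenance comment appended."""
    return f"{s.code}\n# derived from {s.source_repo} ({s.license})"

s = Suggestion(
    code="def add(a, b):\n    return a + b",
    source_repo="github.com/example/mathlib",
    license="MIT",
)
```

Even this much metadata would let an editor plugin show "a window into the original open-source repository" alongside each completion, which is the olive branch the article argues for.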
"
14,513
2,023
"Databricks debuts ChatGPT-like Dolly, a clone any enterprise can own | VentureBeat"
"https://venturebeat.com/ai/databricks-debuts-chatgpt-like-dolly-a-clone-any-enterprise-can-own"
"Databricks debuts ChatGPT-like Dolly, a clone any enterprise can own Image by Canva Pro Was data lakehouse platform Databricks becoming an OpenAI rival on anyone’s 2023 bingo card? Well, hello, Dolly. Today, in an effort the company says is meant to build on its longtime mission to democratize AI for the enterprise, Databricks released the code for an open-source large language model (LLM) called Dolly — named after Dolly the sheep, the first cloned mammal — that it said companies can use to create instruction-following chatbots similar to ChatGPT. The model can be trained, the company explained in a blog post, on very little data and in very little time. “With 30 bucks, one server and three hours, we’re able to teach [Dolly] to start doing human-level interactivity,” said Databricks CEO Ali Ghodsi.
There are many reasons a company would prefer to build its own LLM rather than sending data to a centralized LLM provider that serves a proprietary model behind an API, the blog post explained. Handing sensitive data over to a third party may not be an option, and organizations may have specific needs for model quality, cost and desired behavior. “We believe that most ML users are best served long term by directly owning their models,” said the blog post. Databricks found ChatGPT-like qualities don’t require latest or largest LLM According to the announcement, Dolly is meant to show that anyone “can take a dated off-the-shelf open-source large language model and give it magical ChatGPT-like instruction.” Surprisingly, it said, instruction-following does not seem to require the latest or largest models — Dolly has only 6 billion parameters, compared with 175 billion for GPT-3. “We’ve been calling ourselves a data and AI company since 2013, and we have close to 1,000 customers that have been using some kind of large language model on Databricks,” said Ghodsi, who told VentureBeat he was “blown away” when ChatGPT was launched at the end of November 2022, but realized only a few companies on the planet have the massive language models necessary for ChatGPT-level ability. “Most people were thinking, do we have to all leverage these proprietary models that these very few companies have? And if so, do we have to give them our data?” he said. The answer to both of those questions is no: In February, Meta released the weights for a set of high-quality (but not instruction-following) language models called LLaMA, trained for over 80,000 GPU-hours each, to academic researchers.
Then, in March, Stanford built the Alpaca model, which was based on LLaMA but tuned on a small dataset of 50,000 human-like questions and answers that, surprisingly, made it exhibit ChatGPT-like interactivity. Inspired by those two efforts, Databricks took an existing open-source 6-billion-parameter model from EleutherAI and slightly modified it, using data from Alpaca, to elicit instruction-following capabilities such as brainstorming and text generation not present in the original model. Surprisingly, the modified model worked very well. According to the blog post, this suggests that “much of the qualitative gains in state-of-the-art models like ChatGPT may owe to focused corpuses of instruction-following training data, rather than larger or better-tuned base models.” LLM models will not be in the hands of only a few companies Ghodsi said that going forward there will be many more LLMs, which will become cheaper and cheaper — and won’t be in the hands of only a few companies. “Every organization on the planet will probably utilize these,” he said. “Our belief is that in every industry, the winning, leading companies will be data and AI companies that will be leveraging this kind of technology and will have these kinds of models.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
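The data side of this recipe is simple enough to sketch: a small set of instruction/response pairs is formatted into training prompts for an existing base model. The template below is an assumption modeled loosely on Alpaca-style formatting, not Databricks' exact recipe:

```python
# Sketch of preparing instruction-tuning data: each human-written
# instruction/response pair is rendered into one training prompt.
# The template and example pair are illustrative assumptions.

examples = [
    {"instruction": "Brainstorm three uses for a paperclip.",
     "response": "Bookmark, zipper pull, reset-button tool."},
]

TEMPLATE = (
    "Below is an instruction. Write a response that completes it.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def to_training_prompt(ex: dict) -> str:
    """Render one instruction/response pair into a training prompt."""
    return TEMPLATE.format(**ex)

prompts = [to_training_prompt(ex) for ex in examples]
```

The surprising result reported above is that fine-tuning a dated base model on a few tens of thousands of such prompts, for a few hours, is enough to elicit instruction-following behavior.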
14,514
2,022
"Stanford debuts first AI benchmark to help understand LLMs | VentureBeat"
"https://venturebeat.com/ai/stanford-debuts-first-ai-benchmark-to-help-understand-llms"
"Stanford debuts first AI benchmark to help understand LLMs In the world of artificial intelligence (AI) and machine learning (ML), 2022 has arguably been the year of foundation models, or AI models trained on a massive scale. From GPT-3 to DALL-E, from BLOOM to Imagen — another day, it seems, another large language model (LLM) or text-to-image model. But until now, there have been no AI benchmarks to provide a standardized way to evaluate these models, which have developed at a rapidly accelerating pace over the past couple of years.
LLMs have particularly captivated the AI community, but according to the Stanford Institute for Human-Centered AI (HAI)’s Center for Research on Foundation Models, the absence of an evaluation standard has compromised the community’s ability to understand these models, as well as their capabilities and risks. To that end, today the CRFM announced the Holistic Evaluation of Language Models (HELM), which it says is the first benchmarking project aimed at improving the transparency of language models and the broader category of foundation models. “Historically, benchmarks have pushed the community to rally around a set of problems that the research community believes are valuable,” Percy Liang, associate professor in computer science at Stanford University and director of the CRFM, told VentureBeat. “One of the challenges with language models, and foundation models in general, is that they’re multipurpose, which makes benchmarking extremely difficult.” HELM, he explained, takes a holistic approach to the problem by evaluating language models based on a recognition of the limitations of models, on multi-metric measurement and on direct model comparison, with a goal of transparency. The core tenets used in HELM for model evaluation include accuracy, calibration, robustness, fairness, bias, toxicity and efficiency, pointing to the key elements that make a model sufficient. Liang and his team evaluated 30 language models from 12 organizations: AI21 Labs, Anthropic, BigScience, Cohere, EleutherAI, Google, Meta, Microsoft, NVIDIA, OpenAI, Tsinghua University and Yandex. Some of these models are open-source, others are available through commercial APIs, and others are private.
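The multi-metric idea can be sketched as a small harness: every model is scored on every metric, and the per-model reports sit side by side for direct comparison. The accuracy metric below is real but toy-sized; HELM's other core metrics are noted as placeholders, and the model outputs are invented for illustration:

```python
# Sketch of HELM-style multi-metric evaluation: rather than one accuracy
# number, each model gets a report across several axes, kept side by side
# so models can be compared directly on the same scenarios.

def accuracy(preds, golds):
    """Fraction of predictions exactly matching the references."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def evaluate(model_outputs: dict, golds: list) -> dict:
    """Score every model on every metric and collect a comparison table."""
    report = {}
    for name, preds in model_outputs.items():
        report[name] = {
            "accuracy": accuracy(preds, golds),
            # calibration, robustness, fairness, bias, toxicity and
            # efficiency would be computed here in a full harness
        }
    return report

report = evaluate(
    {"model_a": ["yes", "no"], "model_b": ["yes", "yes"]},
    ["yes", "no"],
)
```

The holistic part is structural: because every model runs through the same scenarios and the same metric set, a low toxicity score or a high accuracy score is directly comparable across the 30 models evaluated.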
A ‘comprehensive approach’ to LLM evaluation “I applaud the Stanford group’s initiative,” Eric Horvitz, chief scientific officer at Microsoft, told VentureBeat by email. “They have taken a comprehensive approach to evaluating language models by creating a taxonomy of scenarios and measuring multiple aspects of performance across them.” Benchmarking neural language models is crucial for directing innovation and progress in both industry and academia, he added. “Evaluation is essential for advancing the science and engineering of neural models, as well as for assessing their strengths and limitations,” he said. “We conduct rigorous benchmarking on our models at Microsoft, and we welcome the Stanford team’s comparative evaluation within their holistic framework, which further enriches our knowledge and insights.” Stanford’s AI benchmark lays foundation for LLM standards Liang says HELM lays the foundation for a new set of industry standards and will be maintained and updated as an ongoing community effort. “It’s a living benchmark that is not going to be done, there are things that we’re missing and that we need to cover as a community,” he said. “This is really a dynamic process, so part of the challenge will be to maintain this benchmark over time.” Many of the choices and ideas in HELM can serve as a basis for further discussion and improvement, agreed Horvitz. “Moving forward, I hope to see a community-wide process for refining and expanding the ideas and methods introduced by the Stanford team,” he said. “There’s an opportunity to involve stakeholders from academia, industry, civil society, and government—and to extend the evaluation to new scenarios, such as interactive AI applications, where we seek to measure how well AI can empower people at work and in their daily lives.” AI benchmarking project is a ‘dynamic’ process Liang emphasized that the benchmarking project is a “dynamic” process. 
“When I tell you about the results, tomorrow they could change because new models are possibly coming out,” he said. One of the main things that the benchmark seeks to do, he added, is capture the differences between the models. When this reporter suggested it seemed a bit like a Consumer Reports analysis of different car models, he said that “is actually a great analogy — it is trying to provide consumers or users or the public in general with information about the various products, in this case models.” What is unique here, he added, is the pace of change. “Instead of being a year, it might be a month before things change,” he said, pointing to Galactica, Meta’s newly released language model for scientific papers, as an example. “This is something that will add to our benchmark,” he said. “So it’s like having Toyota putting out a new model every month instead of every year.” Another difference, of course, is the fact that LLMs are poorly understood and have such a “vast surface area of use cases,” as opposed to a car that is only driven. In addition, the automobile industry has a variety of standards — something that the CRFM is trying to build. “But we’re still very early in this process,” Liang said. HELM AI benchmark is a ‘Herculean’ task “I commend Percy and his team for taking on this Herculean task,” Yoav Shoham, co-founder at AI21 Labs, told VentureBeat by email. “It’s important that a neutral, scientifically-inclined [organization] undertake it.” The HELM benchmark should be evergreen, he added, and updated on a regular basis. “This is for two reasons,” he said. “One of the challenges is that it’s a fast-moving target and in many cases the models tested are out of date. For example, J1-Jumbo v1 is a year old and J1-Grande v1 is six months old, and both have newer versions that haven’t been ready for testing by a third party.” Also, deciding what to test models for is notoriously difficult, he added.
“General considerations such as perplexity (which is objectively defined) or bias (which has a subjective component) are certainly relevant, but the set of yardsticks will also evolve, as we understand better what actually matters in practice,” he said. “I expect future versions of the document to refine and expand these measurements.” Shoham sent one parting note to Liang about the HELM benchmark: “Percy, no good deed goes unpunished,” he joked. “You’re stuck with it.” "
14,515
2,022
"Why AI leaders need a 'backbone' of large language models | VentureBeat"
"https://venturebeat.com/ai/why-ai-leaders-need-a-backbone-of-large-language-models"
"Why AI leaders need a ‘backbone’ of large language models AI adoption may be steadily rising, but a closer examination shows that most enterprise companies may not be quite ready for the big time when it comes to artificial intelligence. Recent data from Palo Alto, California-based AI unicorn SambaNova Systems, for example, shows that more than two-thirds of organizations think using artificial intelligence (AI) will cut costs by automating processes and using employees more efficiently. But only 18% are rolling out large-scale, enterprise-class AI initiatives. The rest are introducing AI individually across multiple programs, rather than risking an investment in big-picture, large-scale adoption.
That will create an increasing amount of distance between companies that are AI leaders and innovators and those that fall behind, said Marshall Choy, senior vice president of product at SambaNova, which offers custom-built dataflow-as-a-service (and won VentureBeat’s AI Innovation Award for Edge AI in 2021). Companies that are more mature in AI and able to invest in large-scale adoption will reap the rewards, he told VentureBeat, while the ones introducing AI across multiple programs will suffer from information and insight silos. “We see time and time again that leaders need to have a holistic view across their organization.” AI is going to transform industries, segments and organizations as dramatically as the internet did, Choy explained. Today’s AI innovators are laying down a unified AI ‘backbone’ of large language models (LLMs) for natural language processing (NLP), which will serve as the foundation for the next 5-10 years of application and deployment in their organizations. “We’re seeing that with those taking a leadership position – it started with the hyperscale, cloud services providers who have done this at massive scale,” he said. “Now, it’s the banks, the energy companies, the pharmaceutical companies, the national laboratories.” Soon, he said, it’s going to be “unheard of” for enterprises not to have an LLM-based AI “backbone.” “The long-term benefit will be to start building out what organizations need to get where they want to be by doing it [all] now, rather than piecing it all together and then having to do a redo in a couple of years,” Choy said. The AI maturity curve predicts enterprise-scale adoption Many organizations are early in the AI maturity curve, which typically means they are self-educating, experimenting and doing pilots to try to determine the right use cases for AI.
“I think those folks are a long way away from enterprise-scale adoption, if they don’t even know what the use cases are,” said Choy. But there are many organizations that are further along, deploying AI for departmental use and beginning to reach a maturity stage. “They’ve got architectural and data maturity, they’re starting to standardize on platforms, they have budgets,” he said. Still, the organizations thinking big and rolling out large-scale projects tend to be in industries like banking, which may have hundreds or thousands of disparate AI models running across the enterprise. Now that foundation models based on tools like GPT-3 are feasible, these organizations can make the kind of big-picture AI investment they need to truly transform their business and provide more customized services for their end users. “It’s almost like a do-over for them – they would have devised this as a strategy three years ago, had the technology been available,” he said. “The banking industry is at the stage where there’s a recognition that AI is going to be the accelerant for the next transitional shift for the enterprise.” Other industries may look to AI for tactical efforts, including cost optimization and gaining more efficiencies. But the ones that are truly reforming and reshaping themselves to create new products and services — and therefore new revenue streams and lines of business – those are the industries that will need that foundational AI “backbone,” Choy added. Advances in language models make ‘backbone’ possible Mature AI organizations are focusing their deep learning efforts on LLMs and language processing. “Inherent in that application is document, text and speech-heavy industries like banking, insurance, some areas of manufacturing like warehousing and logistics,” said Choy.
“I think in a few short years, no industry will be untouched because language is effectively the connector to everything we do.” What’s making this all possible now, he added, is the advances in the language models themselves. “The magic of these new, large language models, like our own GPT banking model, is their generative capabilities,” he said. “From auto-summarization from a voice-ready meeting transcript, for example, or robotic claims, processing and completion, this generative quality takes it to the next level with regard to language – it’s a huge step forward for both front-office customer service-oriented tasks, and also back-office stuff like risk and compliance.” "
14,516
2,023
"Content collaboration is key — so is protecting your enterprise from its threats | VentureBeat"
"https://venturebeat.com/security/protecting-your-enterprise-in-the-age-of-content-collaboration"
"Guest Content collaboration is key — so is protecting your enterprise from its threats Remember the days when workplace collaboration meant everyone was sitting in a conference room handing out printed documents and sharing presentations via a projector? We’ve come a long way since then. Digital content collaboration platforms dominate the market: Slack, Trello, Monday, Salesforce and others have become household names. These tools have quickened the pace of business operations by allowing content and work to become more accessible to colleagues, partners and customers from wherever they are, on whatever device, at any given moment. Content collaboration platforms are a great resource and have had an immense and positive impact on productivity. But it’s no secret that even the most prominent software is not immune from being used for malicious purposes.
The cyber threats associated with content collaboration software are often more novel and difficult to detect than email-based threats — we’re leveling up from the days of bad grammar, spelling mistakes and requests for gift cards! Hackers are stepping up their game, and these aren’t the same types of threats entering your email inbox (or being filtered directly to your spam folder).
Malicious content comes in different forms
Would it surprise you if I told you that one of the most shared file types in collaboration software is GIFs? Enabling GIFs can enhance the user experience, and arguably the most popular source is Giphy. It has been so ingrained in our brains that malicious content arrives in the form of Word docs, Excel spreadsheets or PDFs that we don’t consider innocent-looking images and .gif files to be a threat, but they certainly can be. It is important to understand how most of today’s attacks occur. For starters, many attacks begin with compromised legitimate credentials, meaning you cannot always trust that the person you are communicating with is who they say they are. Let’s say Becca, your colleague in the marketing department, has her Slack account hijacked. The threat actor scrolls through her Slack direct messages and sees that you share GIFs with her daily. All the attacker has to do is find a standard GIF and embed malicious code deep within a pixel — this is not very complex and is also fairly inexpensive. You, the recipient, would think nothing of it, click on the image and open the organization up to full network exposure and attack. Compromising one individual can lead to lateral network movement that jeopardizes the entire organization. The error is not on the end user or the collaboration platform. 
These attacks are too advanced for the everyday user to detect, and collaboration software is often not equipped with the necessary security features to thwart them. On the other hand, within some collaboration software such as Google Drive, videos (or animations) of a certain size do not play natively. Users would have to download the file — before they even have an idea of what the clip is — which could potentially trigger a payload if the file were malicious. The same goes for zip files, especially those that require a password to open. Users don’t always know the contents they’re unpacking, leaving threat actors the ability to bury malicious code deep within shared files.
Do you really know what your security posture looks like?
We all know that any workplace software has limitations with regard to the protection it provides, and that the onus is on the enterprise to integrate the proper guardrails. But that doesn’t stop many of us from having implicit trust in these platforms. The truth of the matter is that while these platforms do offer some level of security, not all of them offer the advanced measures necessary to prevent unknown threats, leaving gaps that hackers can learn to exploit. For example, many large enterprise-grade content collaboration platforms rely only on an antivirus program to prevent malicious content from being uploaded and shared among users. That may seem like a positive feature until you realize that antivirus programs cannot catch zero-day threats. The proliferation of zero-day exploits makes this a very prominent gap in security protection. Secondly, patches or updates issued by the collaboration software often need to be installed by the organization. It’s rarely an automated fix, and if automatic fixes are available, you cannot always trust that they work — this has been a previous complaint with major collaboration platforms. 
The frequency of patches and updates can be overwhelming (Slack for Windows has already issued several updates in 2023 alone). Sure, many of these updates are minor bug fixes, but some address significantly more dangerous issues, like the recent Microsoft Teams vulnerability that takes advantage of Microsoft’s default configuration to reach employees and deliver malware. Sometimes, you cannot afford to let a patch or update sit for an extended period of time.
Considering more than just productivity
Collaboration software is a valuable tool when used securely. I am a huge advocate for finding avenues to accelerate productivity, but not at the expense of security. As a technologist, my areas of expertise span the product lifecycle, and in my prior roles I have always focused on building software with security and usability as priorities. So, before your organization fully embraces content collaboration platforms, I urge security leaders to consider the following: A healthy dose of ‘fear’: We’ve reached a point where users are conditioned to be careful with email — time has proven this — but collaboration tools are not treated the same way. I would never want users to be scared of interacting with content in collaboration software, but currently there is an overwhelming, and dangerous, assumption that these environments are inherently safe. With the purpose of collaboration in mind, we need to recognize that there is work to be done and extra steps must be taken to keep these areas safe from malware and harm. Extending security awareness: Remember, attackers can fool users into believing they are an internal user and pass along malicious content. Educating and training employees on what these threats may look like, and providing general best practices for using collaboration tools safely, is beneficial to all parties. 
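One concrete guardrail implied by the GIF and zip examples above is to check that a shared file's leading bytes (its "magic number") actually match its claimed extension before users can open it. The sketch below is a hypothetical illustration, not any platform's API; passing this check does not make a file safe, since a perfectly valid GIF can still carry an embedded payload, so real protection requires the deeper content inspection discussed in this article.

```python
# Hypothetical helper: verify that a shared file's magic bytes match its
# claimed extension. Passing this check does NOT make a file safe; a
# well-formed GIF can still hide a malicious payload.
MAGIC_BYTES = {
    ".gif": (b"GIF87a", b"GIF89a"),
    ".png": (b"\x89PNG\r\n\x1a\n",),
    ".pdf": (b"%PDF-",),
    ".zip": (b"PK\x03\x04",),
}

def extension_matches_content(filename: str, header: bytes) -> bool:
    """Fail closed: unknown extensions are rejected outright."""
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    signatures = MAGIC_BYTES.get(ext)
    if signatures is None:
        return False
    return any(header.startswith(sig) for sig in signatures)
```

A file named `team-meme.gif` whose first bytes are a Windows executable header would fail this check, which is exactly the mismatch an attacker relying on a trusted-looking name is counting on users never noticing.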
These are high-level considerations, and as your organization continues to embrace and scale its usage of collaboration tools, dig deeper into security mechanisms and bolster your defenses. Aviv Grafi is founder and CTO of Votiro. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,517
2,023
"Stable Diffusion AI art lawsuit, plus caution from OpenAI, DeepMind | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/stable-diffusion-lawsuit-plus-words-of-caution-from-openai-deepmind-the-ai-beat"
"Stable Diffusion AI art lawsuit, plus caution from OpenAI, DeepMind | The AI Beat
Image credit: Sharon Goldman/Midjourney
Back in October, I spoke to experts who predicted that legal battles over AI art and copyright infringement could drag on for years, potentially even going as far as the Supreme Court. Those battles officially began this past Friday, as the first class-action copyright infringement lawsuit around AI art was filed against two companies focused on open-source generative AI art — Stability AI (which developed Stable Diffusion) and Midjourney — as well as DeviantArt, an online art community. 
Artists claim AI models produce ‘derivative works’
Three artists launched the lawsuit through the Joseph Saveri Law Firm and lawyer and designer/programmer Matthew Butterick, who recently teamed up to file a similar lawsuit against Microsoft, GitHub and OpenAI related to the generative AI programming model Copilot. The artists claim that Stable Diffusion and Midjourney scraped the internet to copy billions of works without permission, including theirs, which are then used to produce “derivative works.” In a blog post, Butterick described Stable Diffusion as a “parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.” Stability AI CEO Emad Mostaque told VentureBeat that the company — which last month said it would honor artist requests to opt out of future Stable Diffusion training — has “not received anything to date” regarding the lawsuit, and “once we do we can review it.”
OpenAI’s Sam Altman and DeepMind’s Demis Hassabis signal caution
I’ll be following up on this lawsuit with a more detailed piece — but I thought it was interesting that the news arrives as both OpenAI (which released DALL-E 2 and ChatGPT to immense hype) and DeepMind (which has stayed away from publicly releasing creative AI models) expressed caution regarding the future of generative AI. In a Time magazine interview last week, DeepMind CEO Hassabis said, “When it comes to very powerful technologies — and obviously AI is going to be one of the most powerful ever — we need to be careful. “Not everybody is thinking about those things. 
It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” In urging his competitors to proceed cautiously, he said “I would advocate not moving fast and breaking things.” Meanwhile, as recently as a year ago, OpenAI CEO Sam Altman encouraged speed, tweeting “Move faster. Slowness anywhere justifies slowness everywhere.” But last week he sang a different tune, according to Reuters reporter Krystal Hu, who tweeted: “@sama said OpenAI’s GPT-4 will launch only when they can do it safely & responsibly. ‘In general we are going to release technology much more slowly than people would like. We’re going to sit on it for much longer…'” [Update: Of course, that doesn’t mean OpenAI is really slowing down. Tonight, in fact, the company announced that “we’ve learned a lot from the ChatGPT research preview” and that ChatGPT will also be coming to its API soon.]
Generative AI can turn ‘from foe to friend’
Debates around generative AI — whether in lawsuits, magazine articles or tweets — are certainly only beginning. But the time for these conversations is now, according to the World Economic Forum, which released an article yesterday on the topic tied to its annual meeting currently happening in Davos, Switzerland. “Just as many have advocated for the importance of diverse data and engineers in the AI industry, so must we bring in expertise from psychology, government, cybersecurity and business to the AI conversation,” the article said. “It will take open discussion and shared perspectives between cybersecurity leaders, AI developers, practitioners, business leaders, elected officials and citizens to determine a plan for thoughtful regulation of generative AI. All voices must be heard. Together, we can surely tackle this threat to public safety, critical infrastructure and our world. We can turn generative AI from foe to friend.” Updated by author 1/16 11 pm ET: Added tweet from OpenAI about ChatGPT. 
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,518
2,022
"What is artificial intelligence classification? | VentureBeat"
"https://venturebeat.com/2022/06/16/what-is-artificial-intelligence-classification"
"What is artificial intelligence classification?
Table of contents
What are the types of classification algorithms used in artificial intelligence? How are the classification algorithms trained? What are some of the best-known classification algorithms? How are the major companies attacking classification systems with artificial intelligence? How are startups approaching artificial intelligence classification? Is there anything that artificial intelligence can’t classify?
The first job for many artificial intelligence (AI) algorithms is to examine the data and find the best classification. An autonomous car, for example, may take an image of a street sign; the classification algorithm must interpret the street sign by reading any words and comparing it to a list of known shapes and sizes. 
A phone must listen to a sound and determine whether it is one of its wake-up commands (“Alexa,” “Siri,” “Hey Google”). The job of classification is sometimes the ultimate goal of an algorithm. Many data scientists use AI algorithms to preprocess their data and assign categories. Simply observing the world and recording what is happening is often the main job. Security cameras, for example, are now programmed to detect certain activity that might be suspicious. In many cases, the classification is just the first step of a larger algorithm. The autonomous car will use the classification of a street sign to make decisions about stopping or turning. A smart vacuum cleaner may watch for pets or children, and it’ll turn off or shut down if one is detected.
What are the types of classification algorithms used in artificial intelligence?
There is a wide range of algorithms, varying from general approaches that can train themselves to answer any type of question to focused applications that work on particular domains. For example, optical character recognition algorithms are used to convert paper scans into digital documents by classifying each letter in the image. Other algorithms are designed to work with numerical data. They may divide the range of potential answers into sections representing each possible answer. A simple algorithm for classifying pets as either dogs or hamsters may succeed by examining weight alone: any pet weighing more than one pound would be classified as a dog, and any weighing less than a pound would be classified as a hamster. Other algorithms are more elaborate and rely upon multi-stage models with elaborate feedback loops. Some machine learning algorithms simulate networks of neurons, often with thousands, millions or even billions of simulated neurons in them. 
Each simulated neuron is tuned individually to react to the data and produce an answer. These answers from individual neurons are often fed into another stage of simulated neurons, and the entire network produces the classification as the individual answers flow through the network. [Related: This AI attorney says companies need a chief AI officer — pronto]
How are the classification algorithms trained?
Some simple models for classification can be trained or programmed by a human who understands the domain. The example above of the algorithm that determines whether a pet is a dog or a hamster is very simple, and the human’s domain knowledge is easy to transfer to the model. However, most machine learning algorithms aren’t as simple, and training them requires running another algorithm. It is common for machine learning scientists to create a training subset of the data. This is fed into the training algorithm, which searches for the best parameters and settings for the parts of the model. In our simple example of distinguishing between dogs and hamsters, the threshold of one pound is the only parameter in the model. In practice, many machine learning algorithms set millions or even billions of parameters in the process of training. A common step in the process is to set aside some subset of the initial training dataset to evaluate the quality of the results. This data is kept separate from the training process as a control group, so that testing the model on the segregated data can reveal whether some unforeseen bias crept into the model. In addition, some projects require careful pre-classification and data cleansing, which is sometimes called “embedding.” This standardizes the data and introduces a simple structure that can simplify the process. Some numbers, for instance, may be rounded off. Some words may be converted to all capital letters. Occasionally, a separate classification algorithm is used to perform this step. 
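The dog-versus-hamster model above, together with the training procedure just described (fit on a training subset, evaluate on held-out data), fits in a few lines of plain Python. This is a toy sketch of the idea, not any particular library's API; the one-pound threshold emerges as the model's single trained parameter.

```python
import random

def classify(weight_lbs, threshold):
    """The one-parameter model: heavier than the threshold means dog."""
    return "dog" if weight_lbs > threshold else "hamster"

def accuracy(threshold, data):
    """Fraction of (weight, label) pairs the threshold gets right."""
    return sum(classify(w, threshold) == label for w, label in data) / len(data)

def train(data, holdout_frac=0.25, seed=0):
    """Fit the threshold on a training split; score it on held-out data
    kept out of training, as a guard against hidden bias."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_holdout = max(1, int(len(shuffled) * holdout_frac))
    holdout, training = shuffled[:n_holdout], shuffled[n_holdout:]
    # brute-force parameter search over candidate thresholds
    # (every weight observed in the training split)
    best = max(sorted(w for w, _ in training),
               key=lambda t: accuracy(t, training))
    return best, accuracy(best, holdout)
```

With separable data like the article's example, the search settles on the heaviest hamster weight seen during training, which plays the role of the one-pound rule.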
What are some of the best-known classification algorithms?
The classification algorithms used in AI are a mixture of statistical analysis and algebra, arranged in flowcharts and decision trees. Some approaches predate the idea of creating machine intelligence, emerging from the fields of statistics, calculus and numerical analysis. Many artificial intelligence models use a combination of different approaches and algorithms. Indeed, the choice of algorithm can be a bit of an art. Scientists have a feel for which approaches may work best, and they may try numerous combinations until they find a predictive solution. Some of the best-known approaches are:
- Simple regression: Several good techniques can fit a line or a polynomial to a set of data points. Minimizing the square of the distance is a common technique. Once this line is drawn, a threshold may be set and the possible outcomes from classification mapped to portions of the line.
- Logistic regression: This also uses curve-fitting techniques, but with more complex curves, often sigmoid functions. The large jump in the sigmoid can be adjusted to provide a good threshold between the classification options.
- Bayesian: Another option is to use bell curves, often called Bayesian functions, to match the data. This works well for clusters. Several bell curves can fit several different clusters, and the best threshold can be set by their intersections.
- Support vector machines: This is similar to fitting a line but extends it into multiple dimensions. A plane or collection of planes is positioned to maximize the distance from all the points. These planes become the threshold separating the space.
- Decision tree: Some problems are complex enough that a single regression or threshold isn’t effective. A decision tree creates a flowchart or tree with multiple decisions at each step. In many cases, different variables are used at each step. The process is best for complex datasets where different variables behave very differently, such as when some variables are Boolean and others numerical.
- Random forest: Finding the best collection of decisions for the best tree can be difficult because the possible options increase quickly with the complexity of the data set. The random forest builds many potential trees and tests them all.
- Nearest neighbor: Instead of cutting up a data set with lines or planes, the nearest neighbor approach looks for definitive points in the space. New data points are classified by finding the nearest definitive point. In some cases, the algorithms find a set of weights for the various data fields to adjust how the distance is calculated.
- Neural networks: These are more elaborate AI algorithms that simulate collections of neurons arranged in a network. Each neuron can make a simple decision based upon its inputs. The decisions flow through the network until a final classification is made.
How are the major companies attacking classification systems with artificial intelligence?
All of the major cloud companies maintain strong programs in developing and marketing artificial intelligence applications. Each can easily tackle classification problems using their built-in algorithms. Helping customers sort through and label data is one of the first and best applications for their AI tools. Amazon’s SageMaker, for example, supports many of the best classification algorithms, including nearest neighbor and regression. Its documentation includes a variety of examples for labeling text and image data using all the possible algorithms. The models can also be deployed with many products, such as DeepLens, a fully programmable video camera that can handle some classification problems internally. Google’s AI tools like Vertex AI can all be applied directly to labeling data. 
The AutoML tool includes a number of predefined and automated procedures for classifying image or textual data. There are also several specialized tools and APIs designed for some of the most important use cases. The Cloud Data Loss Prevention tool is optimized for detecting sensitive personal information and then obscuring it. The Cloud Natural Language API has several pretrained models for tasks like analyzing sentiment or classifying content. Microsoft’s Azure offers a wide range of tools that start with supporting basic experimentation and end with pre-built applications for important common tasks. The early work is supported with Jupyter notebooks and a drag-and-drop designer interface. The Azure Applied AI Services have tools that optimize jobs like form recognition and digitization, video analysis for jobs like improving safety through surveillance, and the Metrics Advisor for tracking anomalies in log files. IBM’s products support classification through data science platforms like SPSS and pure AI algorithms. After basic experimentation and exploration, IBM also supports a number of focused tools like the Security Discover and Classify tool, which can help batten down websites and prevent data loss. The Watson Natural Language Understanding tool now includes a feature for creating classification models for text with just a few steps. Oracle’s product line also includes a wide mixture of tools for basic experimentation, as well as focused systems that tackle particular chores. The Human Capital Management tool in their cloud supports HR departments and offers some AI-based features for classifying employees according to their skills with a Skills Engine and a Skills Nexus. The AI Services have many prebuilt models for analyzing speech, text and imagery.
How are startups approaching artificial intelligence classification? 
Startup companies that are solving the problems of classification with artificial intelligence algorithms are also targeting a wide range of markets. Some want to build basic tools that researchers, data scientists and enterprises can deploy. They’re exploring some of the most novel approaches and avenues. Many companies are also applying the algorithms directly to specific niches or applications. They’re focusing on adapting the approaches to the particular idiosyncrasies of the domain by customizing the data collection, cleansing and embedding into a training set. Many of these don’t sell themselves as artificial intelligence companies, even though much of the value they create comes from the algorithms. Affirm, for instance, is a fintech firm offering loans to shoppers. Its “Debit+” card offers 0% APR loans for particular items at sponsoring stores like Lowe’s or Peloton. Other purchases are cleared like normal debit transactions. The AI algorithms work in the background to classify the customers and their purchases. Clarifai offers a wide range of powerful low-code and no-code classification pipelines for processing text, audio, imagery and video. The Flare Edge tool, for instance, is designed to deploy classification models to cameras and sensors throughout the internet, speeding classification by eliminating the need to ship imagery to a data center. Symbl AI works with unstructured text and audio to detect conversational topics and classify them according to tone and intent. It integrates with video, telephony, text and streaming sources. Vectra AI analyzes networks on premises and in data centers to classify threats and identify potential security holes. It watches for dangerous activity like large-scale data exfiltration or encryption to identify the most dangerous threats.
Is there anything that artificial intelligence can’t classify? 
Scientists have a wide range of possible classification functions, and they can often find a good match given enough training data. The problems often appear later, when new data forms a different pattern from the original training data. Even small changes can be significant, because sometimes the models are sensitive to tiny shifts in values. Some implementations deliberately use a feedback mechanism to retrain the model over time. It’s important to note that problems can arise when the data set includes inadvertent patterns. A common difficulty with visual datasets comes from the lighting of the subjects. If a training set is filled with photos taken inside, it may not perform correctly when the new images come from outside or at dusk, for example. Eliminating these subtle differences can be a challenge because humans may not be aware of them. Assembling larger and larger training sets is a common approach to try to ensure that all possible combinations are reflected in the data set. Other problems can arise when the sensors detect very subtle differences that aren’t obvious to the scientists. For example, human skin often becomes slightly redder during the moments when blood is being pumped through it. Some algorithms use a camera alone to sense and measure someone’s pulse. The amount of this flush, though, is rarely enough for human eyes to see. Well-functioning machine learning algorithms can point out subtle differences like this to the human, but sometimes the human discards them as noise. Read more: Does cognitive computing offer the next wave of analytics beyond data science? 
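The indoor/outdoor lighting pitfall described above can be reduced to a toy numeric demonstration of distribution shift (the numbers are entirely synthetic and illustrative): a threshold tuned on one distribution of brightness readings degrades when the same scenes simply get brighter.

```python
def label_by_threshold(brightness, threshold):
    """Toy detector: readings above the threshold are flagged as 'person'."""
    return "person" if brightness > threshold else "background"

def accuracy(data, threshold):
    return sum(label_by_threshold(v, y_t) == y
               for (v, y), y_t in ((pair, threshold) for pair in data)) / len(data)

# Training distribution: indoor shots, where people reflect more light.
indoor = [(40, "background"), (45, "background"), (70, "person"), (75, "person")]
threshold = 55  # separates the indoor data perfectly

# Shifted distribution: the same scenes outdoors, everything ~40 units brighter,
# so background readings now cross the old threshold.
outdoor = [(v + 40, y) for v, y in indoor]

indoor_accuracy = accuracy(indoor, threshold)    # 1.0
outdoor_accuracy = accuracy(outdoor, threshold)  # 0.5
```

Nothing about the model is "wrong" on its own distribution; the failure only appears when the incoming data drifts, which is why the feedback-based retraining mentioned above exists.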
"
14,519
2,023
"Hugging Face reveals generative AI performance gains with Intel hardware | VentureBeat"
"https://venturebeat.com/ai/hugging-face-reveals-generative-ai-performance-gains-with-intel-hardware"
"Hugging Face reveals generative AI performance gains with Intel hardware
Nvidia’s A100 GPU accelerator has enabled groundbreaking innovations in generative AI, powering cutting-edge research that is reshaping what artificial intelligence can achieve. But in the fiercely competitive field of AI hardware, others are vying for a piece of the action. Intel is betting that its latest data center technologies — including a new 4th generation Intel Xeon “Sapphire Rapids” CPU and an AI-optimized Habana Gaudi2 accelerator — can provide an alternative platform for machine learning training and inference. On Tuesday, Hugging Face, an open-source machine learning organization, released a series of new reports showing that Intel’s hardware delivered substantial performance gains for training and running machine learning models. 
The results suggest that Intel’s chips could pose a serious challenge to Nvidia’s dominance in AI computing. The Hugging Face data reported that the Intel Habana Gaudi2 was able to run inference 20% faster on the 176 billion-parameter BLOOMZ model than the Nvidia A100-80G. BLOOMZ is a variant of BLOOM (an acronym for BigScience Large Open-science Open-access Multilingual Language Model), which had its first big release in 2022, providing support for 46 different human languages. Going a step further, Hugging Face reported that the smaller 7 billion-parameter version of BLOOMZ runs three times faster on the Intel Habana Gaudi2 than on the A100-80G. On the CPU side, Hugging Face published data showing the increase in performance for the latest 4th Generation Intel Xeon CPU in comparison to the prior 3rd Generation version. According to Hugging Face, Stability AI’s Stable Diffusion text-to-image generative AI model runs 3.8 times faster without any code changes. With some modification, including the use of the Intel Extension for PyTorch with Bfloat16, a custom numeric format for machine learning, Hugging Face said it was able to get nearly a 6.5-times speed improvement. Hugging Face has posted an online demonstration tool to allow anyone to experience the speed difference. “Over 200,000 people come to the Hugging Face Hub every day to try models, so being able to offer fast inference for all models is super important,” Hugging Face product director Jeff Boudier told VentureBeat. “Intel Xeon-based instances allow us to serve them efficiently and at scale.” Of note, the new Hugging Face performance claims for Intel hardware did not include a comparison against the newer Nvidia H100 Hopper-based GPUs. 
The H100 has only recently become available to organizations like Hugging Face, which, Boudier said, has been able to do only limited testing thus far with it. Intel’s strategy for generative AI is end-to-end Intel has a focused strategy for growing the use of its hardware in the generative AI space. It’s a strategy that involves both training and inference, not just for the biggest large language models (LLMs) but also for real use cases, from the cloud to the edge. “If you look at this generative AI space, it’s still in the early stages and it has gained a lot of hype with ChatGPT in the last few months,” Kavitha Prasad, Intel’s VP and GM of datacenter, AI and cloud execution and strategy, told VentureBeat. “But the key thing is now taking that and translating it into business outcomes, which is still a journey that’s to be had.” Prasad emphasized that an important part of Intel’s strategy for AI adoption is enabling a “build once and deploy everywhere” concept. The reality is that very few companies can actually build their own LLMs. Rather, an organization will typically need to fine-tune existing models, often with the use of transfer learning, an approach that Intel supports and encourages with its hardware and software. With Intel Xeon-based servers deployed in all manner of environments including enterprises, edge, cloud and telcos, Prasad noted that Intel has big expectations for the wide deployment of AI models. “Coopetition” with Nvidia will continue with more performance metrics to come While Intel is clearly competing against Nvidia, Prasad said that in her view it’s a “coopetition” scenario, which is increasingly common across IT in general. In fact, Nvidia is using the 4th Generation Intel Xeon in some of its own products, including the DGX H100 that was announced in January. “The world is going towards a ‘coopetition’ environment and we are just one of the participants in it,” Prasad said. 
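The transfer-learning approach Prasad mentions can be illustrated in miniature: keep a pretrained "backbone" frozen and train only a small task-specific head on new data. This is a toy sketch with made-up functions, not Intel's or Hugging Face's actual training stack:

```python
# Toy transfer learning: the backbone is frozen; only the linear head is trained.

def pretrained_features(x):
    # Stand-in for a frozen pretrained feature extractor.
    return [1.0, x, x * x]

def train_head(data, lr=0.1, epochs=2000):
    # Plain SGD on a linear head over the frozen features.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    return sum(wi * fi for wi, fi in zip(w, pretrained_features(x)))

# Fine-tune on a handful of task examples following y = 1 + 2x + 3x^2.
data = [(x / 2, 1 + 2 * (x / 2) + 3 * (x / 2) ** 2) for x in range(-2, 3)]
w = train_head(data)
```

Only the three head weights are updated; the feature extractor never changes, which is what makes fine-tuning far cheaper than pretraining a model from scratch.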
Looking forward, she hinted at additional performance metrics from Intel that will be “very positive.” In particular, the next round of MLCommons MLPerf AI benchmarking results is due to be released in early April. She also hinted that more hardware is coming soon, including a Habana Gaudi3 GPU accelerator, though she did not provide any details or timeline. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,520
2,023
"Oracle brings generative AI to healthcare: Clinical Digital Assistant | VentureBeat"
"https://venturebeat.com/ai/oracle-brings-voice-activated-ai-to-healthcare-with-clinical-digital-assistant"
"Oracle brings voice-activated AI to healthcare with Clinical Digital Assistant Oracle is moving to embrace generative AI for healthcare. Today, at its annual health conference in Las Vegas, the Larry Ellison-led company announced it is integrating an AI-powered Clinical Digital Assistant into its EHR (electronic health record) solutions to help caregivers automate certain administrative tasks in their workflows and focus on what matters the most: quality of patient care. The announcement comes at a time when enterprises across sectors are racing to embrace generative AI but healthcare organizations continue to move at their own, steady pace. According to a recent GE Healthcare survey, one of the biggest reasons behind this slowed adoption is the lack of trust in generative AI technologies – stemming from problems like bias in outputs. 
With the introduction of its proprietary AI assistant, Oracle could help address some of those concerns. The company says it is especially useful for healthcare teams struggling with staffing issues – a problem expected to get worse over the coming years, with a projected shortage of 18 million workers by 2030. It could also provide patients with improved self-service experiences. How exactly will Oracle Clinical Digital Assistant help? EHR solutions connect data from different touchpoints and improve the care delivery process, from reviewing previous treatments taken by the patients to prescribing medications. However, in their current form, EHRs require clinicians to interact with the system, which takes time and breaks the care delivery experience patients expect. With the new generative AI-powered Clinical Digital Assistant, Oracle EHR solutions will provide caregivers with a multimodal helper of sorts, one that could work via both text and voice commands. This way, during an appointment, clinicians no longer have to interact with a screen to find the information needed. They can handle routine tasks — such as seeking the latest MRI scans and prescriptions — by simply calling out to the assistant. According to Oracle, when prompted, the assistant looks up the required elements in the database and delivers all the information — from images to documents — in a relevant order, allowing the physician to gain insight into the appropriate treatment path right away. Plus, it remains active throughout the appointment and uses generative AI to handle administrative tasks like taking notes of the conversation as well as suggesting context-aware next actions, such as ordering medication or scheduling labs and follow-up appointments. 
The whole thing works on top of Oracle’s broader Digital Assistant platform, designed to help enterprises create chat and voice-based conversational experiences for their business applications. It’s already in use by multiple organizations, including FedEx, Echo, Exelon, Equity Residential and Razer. “By bringing comprehensive generative AI and voice-first capabilities to our EHR platforms, we are not only helping providers reduce mundane work that leads to burnout, but we are also empowering them to create better interactions with patients that establish trust, build loyalty, and deliver better outcomes,” said Suhas Uliyar, senior vice president of product management at Oracle Health, in a statement. There’s more to it Beyond improving clinicians’ workflow, the Clinical Digital Assistant would also help patients with things like scheduling appointments or paying bills. Oracle says patients will be able to use the bot as a strong source of medical knowledge by asking questions in natural language, similar to the way consumers can interact with popular large language models (LLMs) such as OpenAI’s ChatGPT or Anthropic’s Claude 2. Meanwhile, providers could link it with their secure portal to provide them with helpful information, like reminders to bring lab reports during an upcoming appointment. Currently, only some of these capabilities are rolling out. However, the company expects a full rollout over the next 12 months. The move comes as the latest leg in Oracle’s bigger generative AI effort. Prior to this, the company debuted generative AI features for its Fusion Cloud Human Capital Management (HCM) offering, making it easier for enterprises to handle HR tasks like writing job descriptions or drafting employee surveys. During the company’s fourth-quarter earnings call, Ellison also confirmed they are developing a new cloud service with Toronto-based Cohere to make it easy for enterprises to train their own customized LLMs. 
"
14,521
2,023
"Nvidia's beats estimates with quarterly revenue of $7.19B, down 13% from year ago | VentureBeat"
"https://venturebeat.com/ai/nvidias-beats-estimates-with-quarterly-revenue-of-7-19b-down-13-from-year-ago"
"Nvidia beats estimates with quarterly revenue of $7.19B, down 13% from year ago Nvidia GeForce RTX4060 Nvidia reported revenues of $7.19 billion for the first fiscal quarter ended April 30, down 13% from a year ago. But it beat expectations on Wall Street in the quarter. The maker of AI and graphics chips said it had record data center revenue of $4.28 billion, up 14% from a year ago. That’s a sign that data center customers are on a recovery path. In after-hours trading, Nvidia’s stock price is up to $375.26 a share, up 23%. Analysts expected Nvidia to post adjusted earnings of 92 cents a share, but Nvidia came in at $1.09 a share, or $2.7 billion, in adjusted net income. That was down 20% from a year ago and up 24% from the previous quarter. Wall Street had only expected $6.53 billion in revenues in the quarter. 
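The year-over-year percentages in these results can be sanity-checked with simple arithmetic; for instance, revenue of $7.19 billion reported as down 13% implies a year-ago quarter of roughly $8.26 billion. A quick sketch (the helper names are illustrative, not from the article):

```python
def yoy_change(current, prior):
    # Fractional change versus the year-ago period.
    return (current - prior) / prior

def implied_prior(current, pct_change):
    # Back out the year-ago figure from a reported percentage change.
    return current / (1 + pct_change)

# Q1 revenue of $7.19B reported as down 13% year over year:
prior_revenue = implied_prior(7.19, -0.13)  # roughly 8.26 ($B)
```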
Gaming First-quarter revenue was $2.24 billion, down 38% from a year ago and up 22% from the previous quarter. Nvidia announced the GeForce RTX 4060 family of GPUs, bringing the advancements of Nvidia Ada Lovelace architecture and DLSS, starting at $299. Nvidia launched the GeForce RTX 4070 GPU based on the Ada architecture, which enables DLSS 3, real-time ray-tracing and the ability to run most modern games at over 100 frames per second at 1440p resolution. It also added 36 DLSS gaming titles, bringing the total number of games and apps to 300. And Nvidia expanded GeForce Now’s game titles to more than 1,600. Data center First-quarter revenue was a record $4.28 billion, up 14% from a year ago and up 18% from the previous quarter. The company launched four inference platforms that combine the company’s full-stack inference software with the latest Nvidia Ada, Hopper and Grace Hopper processors. Professional visualization First-quarter revenue was $295 million, down 53% from a year ago and up 31% from the previous quarter. Automotive First-quarter revenue was a record $296 million, up 114% from a year ago and up 1% from the previous quarter. The company announced that its automotive design win pipeline has grown to $14 billion over the next six years, up from $11 billion a year ago. Nvidia expects revenue for the second fiscal quarter ending July 31 to be $11 billion, a considerable bump upward from the prior quarter. GAAP earnings per diluted share for the quarter were 82 cents, up 28% from a year ago and up 44% from the previous quarter. “The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, founder and CEO of Nvidia, in a statement. 
“A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.” He added, “Our entire data center family of products — H100, Grace CPU, Grace Hopper Superchip, NVLink, Quantum 400 InfiniBand and BlueField-3 DPU — is in production. We are significantly increasing our supply to meet surging demand for them.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. "
14,522
2,023
"RedPajama replicates LLaMA dataset to build open source, state-of-the-art LLMs | VentureBeat"
"https://venturebeat.com/ai/redpajama-replicates-llama-to-build-open-source-state-of-the-art-llms"
"RedPajama replicates LLaMA dataset to build open source, state-of-the-art LLMs Thought the open source AI references to camelids were finished? Think again: Yesterday, Together, a Menlo Park, California-based company focused on building a decentralized cloud and open source models, announced RedPajama (yes, like Llama Llama Red Pajama). “In many ways, AI is having its Linux moment,” the company said in a blog post, linking to a January post written by Chris Re, co-founder of Together, Stanford associate professor and co-founder of SambaNova, Snorkel.ai and Factory. RedPajama is a collaborative project between Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, Hazy Research, and MILA Québec AI Institute to create leading, fully open-source large language models (LLMs). 
Its effort began with yesterday’s release of a 1.2 trillion token dataset that follows the LLaMA recipe. The data enables any organization to pre-train models that can be permissively licensed. The full dataset is available on Hugging Face and users can reproduce results with Apache 2.0 scripts available on Github. LLaMA is a state-of-the-art foundation LLM released in February by Meta with gated access to researchers. Several other models based on LLaMA have come out in recent weeks, including Alpaca, Vicuna and Koala — but those models have not been available for commercial use. There was also some LLaMA-drama when the LLaMA model was leaked on 4chan. In the coming weeks, Together will release a full suite of LLMs and instruction tuned versions based on the RedPajama dataset. The company emphasized that the forthcoming models will be fully open-source and commercially viable. In a tweet, the company said, “We hope this can be a clean-room, drama-free version. The RedPajama models we release, starting in the coming weeks, will be released under the Apache 2.0 license.” RedPajama part of a wave of open source AI As VentureBeat reported last week, open source AI has been having a moment over the past few weeks, following the wave of LLM releases and an effort by startups, collectives and academics to push back on the shift in AI to closed, proprietary LLMs. And a camelid-adjacent model, Dolly 2.0 (as in Dolly the Sheep), also made headlines last week when its developer, Databricks, called it the first open, instruction-following LLM for commercial use. But the largest, state-of-the-art open source LLMs like LLaMA have been limited to the research community. “They are limited in that you can’t build real applications and ship them,” said Vipul Ved Prakash, founder and CEO of Together and previously cofounder of Cloudmark and Topsy. 
“We think having permissively licensed models is a critical aspect of open source AI.” Replicating the LLaMA dataset was no small task The company started with LLaMA, which it called the “leading suite of open base models,” because it was trained on a “very large dataset that was carefully filtered for quality.” Also, the 7 billion parameter LLaMA model is “trained for much longer, well beyond the Chinchilla-optimal point, to ensure the best quality at that model size.” While neither the dataset nor the model will be identical, the developers aim to create a fully open source reproduction of LLaMA which would be available for commercial applications, and provide a “more transparent pipeline for research.” The developers did not have access to the LLaMA dataset but had enough of a recipe to go on. “We followed the recipe very carefully to essentially recreate [the LLaMA dataset] from scratch,” said Prakash. The dataset consists of seven data slices, including data from Common Crawl, arXiv, Github, Wikipedia and a corpus of open books. “For each data slice, we conduct careful data pre-processing and filtering, and tune our quality filters to roughly match the number of tokens as reported by Meta AI in the LLaMA paper,” read the blog post. “All of the data LLaMA was trained on is openly available data, but the challenge was that they didn’t provide the actual data set — there’s a lot of work to go from the overview to the actual data set,” said Prakash. For example, he explained, the paper might describe how they picked the best 10,000 from a million documents, but they didn’t give you the 10,000. “So we followed the recipe to repeat all that work to create an equivalent dataset,” he said. The debate over building transparent systems Prakash said that the RedPajama project collaborators believe it’s important that systems are transparent. “You know exactly how this model was built, what went into it,” he said. 
“If you’re trying to improve it, you can start from the dataset.” The project also brings a larger community to these models, he added. “I would say academia has really been cut out of foundation model research because of the level of resources required, starting from data to the compute,” he said. He added that there is a small number of people in the world working on these large models today, and if there was broader access, “a lot of brilliant people” around the world would be able to explore different directions of neural architectures, training algorithms and safety research. “Also, this is one of the first really general AI which can be adapted to different tasks, and we think the applicability is very broad,” he said. “But many different applications are possible only if you have access to the model, the model weights, and adapt them to different computing environments. We see a lot of this happen because of open source AI.” There is another side to the open source AI debate, however. For example, Ilya Sutskever, OpenAI’s chief scientist and co-founder, recently said it was “wrong” to share research so openly, saying fears over competition and safety were “self-evident.” He added that “at some point it will be quite easy, if one wanted, to cause a great deal of harm with those models.” And in a recent interview with VentureBeat, Joelle Pineau, VP of AI research at Meta, said that while accountability and transparency in AI models is essential, the key for Meta is to balance the level of access, which can vary depending on the potential harm of the model. “My hope, and it’s reflected in our strategy for data access, is to figure out how to allow transparency for verifiability audits of these models,” she said, adding that access could be decided based on the level of potential harm of the model. On the other hand, she said that some levels of openness go too far. “That’s why the LLaMA model had a gated release,” she explained. 
“Many people would have been very happy to go totally open. I don’t think that’s the responsible thing to do today.” Debates around ethical datasets as well There have also been debates about the ethics of the datasets themselves, whether the models are open or closed. An article last week in The Guardian said that the “enormous datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks, the entire proceedings of 16 years of the European parliament and the whole of English-language Wikipedia.” But Prakash says that he thinks “these models capture in some ways the output of human society and there is a sort of obligation to make them open and usable by everyone.” He added that “most of the magic” of these models comes from the fact that they are trained on “really broad and vast” data. He also pointed out that the original data is compressed significantly in the actual model. The RedPajama dataset is 5 terabytes, and the models can be as small as 14 GB, ~500x smaller than the original data they are modeling. “This means that knowledge from the data is abstracted, transformed and modeled in a very different representation of weights and biases of parameters in the neural network model, and not stored and used in its original form,” said Prakash. So, it is “not reproducing the training data — it is derivative work on top of that. From our understanding, it is considered fair use as long as the model is not reproducing the data — it’s learning from it.” There is no doubt that the open source AI debates are highly complex. But when asked why the company called the new project RedPajama, the answer was far simpler. “A lot of us have small children,” said Prakash. 
“It just seemed fun.” "
14,523
2,023
"Stability AI unveils its first LLM, as open-source AI race continues | VentureBeat"
"https://venturebeat.com/ai/stability-ai-unveils-its-first-llm-as-open-source-ai-race-continues"
"Stability AI unveils its first LLM, as open-source AI race continues Stability AI, the company funding the development of open-source generative AI models like Stable Diffusion and Dance Diffusion, today announced the launch of its StableLM suite of language models. After developing models for multiple domains, including image, audio, video, 3D and biology, this is the first time the developer is jumping into the language model game currently dominated by tech heavyweights such as OpenAI, Meta and Stanford. The suite’s first offering, the StableLM open-source language model, is now available in alpha, featuring 3 billion and 7 billion parameters, both trained on 800 billion data tokens, with larger 15-billion to 65-billion parameter models to follow. 
In 2022, Stability AI introduced Stable Diffusion, a groundbreaking open-source image model that offers a transparent and scalable alternative to proprietary AI. With the release of the StableLM suite, the company aims to demonstrate how small, efficient models can provide high performance with the appropriate training. StableLM is an extension of the company’s foundational AI technology, which promotes transparency, accessibility and support in AI design. Stability AI believes that the release represents another significant step toward making foundational AI technology accessible to all, with numerous applications, including generating text and code. Open-source is the new cool The StableLM suite builds on Stability AI’s prior work, including the groundbreaking Stable Diffusion image model, which offered an open-source alternative to proprietary generative AI image models such as DALL-E. In addition, the StableLM language model can generate text and code, making it ideal for various downstream applications. Despite its small size, the model is surprisingly effective in conversational and coding tasks (similar to OpenAI’s ChatGPT) due to its training on an experimental dataset. Stability AI has a track record of open-sourcing earlier language models, such as GPT-J, GPT-NeoX, and the Pythia suite, trained on The Pile open-source dataset. StableLM-Alpha models are trained on a new dataset that builds on The Pile and contains 1.5 trillion tokens. The new “experimental dataset” is supposedly three times larger than The Pile, and the context length for the StableLM models is 4,096 tokens. Stability AI is strongly committed to transparency and accessibility in AI design, and the StableLM suite is no exception. 
Developers are encouraged to freely inspect, use and adapt the StableLM base models for commercial or research purposes, subject to the terms of the CC BY-SA-4.0 license. Under the license, you must give credit to Stability AI, provide a link to the license, and indicate if changes were made. According to the license document, users may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses them or their use. In a post, the company announced that the StableLM suite also includes a set of research models that are instruction fine-tuned, using a combination of five recent open-source datasets for conversational agents. As a proof of concept, the company fine-tuned the StableLM model with Stanford Alpaca’s procedure using a combination of five recent datasets for conversational agents: Stanford’s Alpaca, Nomic-AI’s gpt4all, RyokoAI’s ShareGPT52K, Databricks labs’ Dolly and Anthropic’s HH, and will be releasing these models as StableLM-Tuned-Alpha. Stability AI said an upcoming technical report would document the model’s specifications and the training settings. These models are also intended for research use only and are released under the noncommercial CC BY-NC-SA 4.0 license, in line with Stanford’s Alpaca license. The LLM race just got bigger The 800 billion-token training dataset is notable compared to Meta’s LLaMA language model, trained on 1 trillion tokens for 7 billion parameters. Recently, Menlo Park-based firm Together announced the launch of RedPajama, an open-source project developed in collaboration with several AI institutions including Ontocord AI, ETH DS3Lab, Stanford CRFM, Hazy Research and MILA Québec AI Institute. That project is quite similar to Stability AI’s approach, aiming to create large language models (LLMs) that are fully open source and lead the industry in performance. 
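One way to read these token counts is as tokens per parameter, measured against the roughly 20-tokens-per-parameter "Chinchilla-optimal" rule of thumb (an outside reference point from the Chinchilla scaling-law work, not a figure from this article):

```python
def tokens_per_parameter(tokens, params):
    return tokens / params

CHINCHILLA_TOKENS_PER_PARAM = 20  # rough rule of thumb from the Chinchilla paper

# StableLM-Alpha 7B: 800B training tokens; Meta's LLaMA 7B: 1T tokens.
stablelm_ratio = tokens_per_parameter(800e9, 7e9)  # roughly 114 tokens/param
llama_ratio = tokens_per_parameter(1e12, 7e9)      # roughly 143 tokens/param
```

Both models are trained far past the 20:1 ratio, which is the "trained for much longer, well beyond the Chinchilla-optimal point" strategy of trading extra training compute for better quality at a fixed model size.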
The initial dataset released by RedPajama contains 1.2 trillion tokens and adheres to the LLaMA recipe, despite being significantly smaller than Meta’s LLaMA model. Its dataset is publicly available on Hugging Face, while Apache 2.0 scripts on GitHub can be used to reproduce the results. According to Stability AI, language models are the backbone of the digital economy, and everyone should have a voice in their design. By offering fine-grained access to the models, the company hopes to encourage the development of interpretability and safety techniques beyond what is possible with closed models. The company’s models are now available in its GitHub repository, and Stability AI plans to publish a full technical report in the near future. Stability AI is also seeking to grow its team and is looking for individuals passionate about democratizing access to this technology and experienced in LLMs. For those interested, the company is accepting applications on its website. In addition to its work on the StableLM suite, Stability AI is kicking off its crowd-sourced RLHF program and working with community efforts such as Open Assistant, an initiative to create an open-source dataset for AI assistants. The company plans to release more models soon and says it is excited to collaborate with developers and researchers to roll out the StableLM suite. "
14,524
2,023
"‘I didn’t give permission’: Do AI’s backers care about data law breaches? | Artificial intelligence (AI) | The Guardian"
"https://www.theguardian.com/technology/2023/apr/10/i-didnt-give-permission-do-ais-backers-care-about-data-law-breaches"
"Regulators around the world are cracking down on content being hoovered up by ChatGPT, Stable Diffusion and others.
A demonstrator holding a ‘No AI’ placard. In Italy, ChatGPT has been banned after the regulator said there appeared to be no legal basis to justify the collection and storage of personal data. Photograph: Wachiwit/Alamy ‘I didn’t give permission’: Do AI’s backers care about data law breaches? Mon 10 Apr 2023 05.10 EDT Cutting-edge artificial intelligence systems can help you escape a parking fine , write an academic essay , or fool you into believing Pope Francis is a fashionista. But the virtual libraries behind this breathtaking technology are vast – and there are concerns they are operating in breach of personal data and copyright laws.
The enormous datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks, the entire proceedings of 16 years of the European parliament and the whole of English-language Wikipedia. But the industry’s voracious appetite for big data is starting to cause problems, as regulators and courts around the world crack down on researchers hoovering up content without consent or notice. In response, AI labs are fighting to keep their datasets secret, or even daring regulators to push the issue. In Italy, ChatGPT has been banned from operating after the country’s data protection regulator said there was no legal basis to justify the collection and “massive storage” of personal data in order to train the GPT AI. On Tuesday, the Canadian privacy commissioner followed suit with an investigation into the company in response to a complaint alleging “the collection, use and disclosure of personal information without consent”. Britain’s data watchdog expressed its own concerns. “Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources,” said Stephen Almond, the director of technology and innovation at the Information Commissioner’s Office. Michael Wooldridge, a professor of computer science at the University of Oxford, says “large language models” (LLMs), such as those that underpin OpenAI’s ChatGPT and Google’s Bard, hoover up colossal amounts of data. “This includes the whole of the world wide web – everything. Every link is followed in every page, and every link in those pages is followed … In that unimaginable amount of data there is probably a lot of data about you and me,” he says, adding that comments about a person and their work could also be gathered by an LLM. 
“And it isn’t stored in a big database somewhere – we can’t look to see exactly what information it has on me. It is all buried away in enormous, opaque neural networks.” Wooldridge says copyright is a “coming storm” for AI companies. LLMs are likely to have accessed copyrighted material, such as news articles. Indeed, the GPT-4-assisted chatbot attached to Microsoft’s Bing search engine cites news sites in its answers. “I didn’t give explicit permission for my works to be used as training data, but they almost certainly were, and now they contribute to what these models know,” he says. “Many artists are gravely concerned that their livelihoods are at risk from generative AI. Expect to see legal battles,” he adds. Lawsuits have emerged already, with the stock photo company Getty Images suing the British startup Stability AI – the company behind the AI image generator Stable Diffusion – after claiming that the image-generation firm violated copyright by using millions of unlicensed Getty photos to train its system. In the US a group of artists is suing Midjourney and Stability AI in a lawsuit that claims the companies “violated the rights of millions of artists” in developing their products by using artists’ work without their permission. A sketch drawn by Kris Kashtanova that the artist fed into the AI program Stable Diffusion and transformed into the resulting image using text prompts. Awkwardly for Stability, Stable Diffusion will occasionally spit out pictures with a Getty Images watermark intact, examples of which the photography agency included in its lawsuit. In January, researchers at Google even managed to prompt the Stable Diffusion system to recreate near-perfectly one of the unlicensed images it had been trained on, a portrait of the US evangelist Anne Graham Lotz. Copyright lawsuits and regulator actions against OpenAI are hampered by the company’s absolute secrecy about its training data.
In response to the Italian ban, Sam Altman, the chief executive of OpenAI, which developed ChatGPT, said: “We think we are following all privacy laws.” But the company has refused to share any information about what data was used to train GPT-4, the latest version of the underlying technology that powers ChatGPT. Even in its “ technical report ” describing the AI, the company curtly says only that it was trained “using both publicly available data (such as internet data) and data licensed from third-party providers”. Further information is hidden, it says, due to “both the competitive landscape and the safety implications of large-scale models like GPT-4”. Others take the opposite view. EleutherAI describes itself as a “non-profit AI research lab”, and was founded in 2020 with the goal of recreating GPT-3 and releasing it to the public. To that end, the group put together the Pile, an 825-gigabyte collection of datasets gathered from every corner of the internet. It includes 100GB of ebooks taken from the pirate site bibliotik, another 100GB of computer code scraped from Github, and a collection of 228GB of websites gathered from across the internet since 2008 – all, the group acknowledges, without the consent of the authors involved. Eleuther argues that the datasets in the Pile have all been so widely shared already that its compilation “does not constitute significantly increased harm”.
But the group does not take the legal risk of directly hosting the data, instead turning to a group of anonymous “data enthusiasts” called the Eye, whose copyright takedown policy is a video of a choir of clothed women pretending to masturbate their imaginary penises while singing. Some of the information produced by chatbots has also been false. ChatGPT has falsely accused a US law professor, Jonathan Turley, of George Washington University, of sexually harassing one of his students – citing a news article that didn’t exist. The Italian regulator had also referred to the fact that ChatGPT’s responses do not “always match factual circumstances” and “inaccurate personal data are processed”. An annual report into progress in AI showed that commercial players were dominating the industry, ahead of academic institutions and governments. According to the 2023 AI Index report , compiled by California-based Stanford University, last year there were 32 significant industry-produced machine-learning models, compared with three produced by academia. Up until 2014, most of the significant models came from the academic sphere, but since then the cost of developing AI models, including staff and computing power, has risen. “Across the board, large language and multimodal models are becoming larger and pricier,” the report said. An early iteration of the LLM behind ChatGPT, known as GPT-2, had 1.5bn parameters, analogous to the neurons in a human brain, and cost an estimated $50,000 to train. By comparison, Google’s PaLM had 540bn parameters and cost an estimated $8m. This has raised concerns that corporate entities will take a less measured approach to risk than academic or government-backed projects. Last week a letter whose signatories included Elon Musk and the Apple co-founder Steve Wozniak called for an immediate pause in the creation of “giant AI experiments” for at least six months.
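To put the AI Index figures in perspective, the scale-up from GPT-2 to PaLM can be computed directly. This is a rough illustration using only the numbers quoted above, not analysis from the original report:

```python
# Scale-up from GPT-2 to Google's PaLM, using the figures quoted above.
gpt2_params, gpt2_cost = 1.5e9, 50_000      # 1.5bn parameters, ~$50,000 to train
palm_params, palm_cost = 540e9, 8_000_000   # 540bn parameters, ~$8m to train

print(f"Parameters grew {palm_params / gpt2_params:.0f}x")  # 360x
print(f"Training cost grew {palm_cost / gpt2_cost:.0f}x")   # 160x
```

Cost has grown more slowly than parameter count, but a 160x increase is still far beyond most academic budgets, which is the concern the report raises.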
The letter said there were concerns that tech firms were creating “ever more powerful digital minds” that no one could “understand, predict, or reliably control”. Dr Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, in England, said: “Big AI means that these AIs are being created purely by large profit-driven corporates, which unfortunately means that our interests as human beings aren’t necessarily well represented.” He added: “We have to focus our efforts on making AI smaller, more efficient, requiring less data, less electricity, so that we can democratise access to AI.” "
14,525
2,021
"Nvidia unveils Grace ARM-based CPU for giant-scale AI and HPC apps | VentureBeat"
"https://venturebeat.com/ai/nvidia-unveils-grace-arm-based-cpu-for-giant-scale-ai-and-hpc-apps"
"Nvidia unveils Grace ARM-based CPU for giant-scale AI and HPC apps. Nvidia’s Grace CPU for datacenters is named after Grace Hopper. Nvidia unveiled its Grace processor today. It’s an ARM-based central processing unit (CPU) for giant-scale artificial intelligence and high-performance computing applications. It’s Nvidia’s first datacenter CPU, purpose-built for applications that are operating on a giant scale, Nvidia CEO Jensen Huang said in a keynote speech at Nvidia’s GTC 2021 event. Grace delivers a 10x performance leap for systems training giant AI models, using energy-efficient ARM cores. And Nvidia said the Swiss Supercomputing Center and the U.S. Department of Energy’s Los Alamos National Laboratory will be the first to use Grace, which is named for Grace Hopper , who pioneered computer programming in the 1950s. The CPU is expected to be available in early 2023.
“Grace is a breakthrough CPU. It’s purpose-built for accelerated computing applications of giant scale for AI and HPC,” said Paresh Kharya, senior director of product management and marketing at Nvidia, in a press briefing. Huang said, “It’s the world’s first CPU designed for terabyte scale computing.” Above: Grace is the result of 10,000 engineering years of work. The CPU is the result of more than 10,000 engineering years of work. Nvidia said the chip will address the computing requirements for the world’s most advanced applications — including natural language processing, recommender systems, and AI supercomputing — that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. Grace combines energy-efficient ARM CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency. The chip will use a future ARM core dubbed Neoverse. “Leading-edge AI and data science are pushing today’s computer architecture beyond its limits — processing unthinkable amounts of data,” Huang said in his speech. “Using licensed ARM IP, Nvidia has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing and the ability to re-architect the datacenter to advance AI. Nvidia is now a three-chip company.” Grace is a highly specialized processor targeting workloads such as training next-generation NLP models that have more than 1 trillion parameters. When tightly coupled with Nvidia GPUs, a Grace-based system will deliver 10 times faster performance than today’s Nvidia DGX-based systems, which run on x86 CPUs. In a press briefing, someone asked if Nvidia will compete with x86 chips from Intel and AMD.
Kharya said, “We are not competing with x86 … we continue to work very well with x86 CPUs.” Above: The Alps supercomputer will use Grace CPUs from Nvidia. Grace is designed for AI and HPC applications, but Nvidia isn’t disclosing additional information about where Grace will be used today. Nvidia also declined to disclose the number of transistors in the Grace chip. Nvidia is introducing Grace as the volume of data and size of AI models grow exponentially. Today’s largest AI models include billions of parameters and are doubling every two and a half months. Training them requires a new CPU that can be tightly coupled with a GPU to eliminate system bottlenecks. “The biggest announcement of GTC 21 was Grace, a tightly integrated CPU for over a trillion parameter AI models,” said Patrick Moorhead, an analyst at Moor Insights & Strategy. “It’s hard to address those with classic x86 CPUs and GPUs connected over PCIe. Grace is focused on IO and memory bandwidth, shares main memory with the GPU and shouldn’t be confused with general purpose datacenter CPUs from AMD or Intel.” Underlying Grace’s performance is 4th-gen Nvidia NVLink interconnect technology, which provides 900 gigabyte-per-second connections between Grace and Nvidia graphics processing units (GPUs) to enable 30 times higher aggregate bandwidth compared to today’s leading servers. Grace will also utilize an innovative LPDDR5x memory subsystem that will deliver twice the bandwidth and 10 times better energy efficiency compared with DDR4 memory. In addition, the new architecture provides unified cache coherence with a single memory address space, combining system and HBM GPU memory to simplify programmability. “The Grace platform and its Arm CPU is a big new step for Nvidia,” said Kevin Krewell, an analyst at Tirias Research, in an email. “The new design of one custom CPU attached to the GPU with coherent NVlinks is Nvidia’s new design to scale to ultra-large AI models that now take days to run.
The key to Grace is that using the custom Arm CPU, it will be possible to scale to large LPDDR5 DRAM arrays far larger than possible with high-bandwidth memory directly attached to the GPUs.” Above: The Los Alamos National Laboratory will use Grace CPUs. Grace will power the world’s fastest supercomputer for the Swiss organization. Dubbed Alps, the machine will feature 20 exaflops of AI processing. (This refers to the amount of computing available for AI applications.) That’s about 7 times more computation than is available with the 2.8-exaflop Nvidia Selene supercomputer, the leading AI supercomputer today. Hewlett Packard Enterprise will be building the Alps system. Alps will work on problems in areas ranging from climate and weather to materials sciences, astrophysics, computational fluid dynamics, life sciences, molecular dynamics, quantum chemistry, and particle physics, as well as domains like economics and social sciences, and will come online in 2023. Alps will do quantum chemistry and physics calculations for the Large Hadron Collider, as well as weather models. Above: Jensen Huang, CEO of Nvidia, at GTC 21. “This is a very balanced architecture with Grace and a future Nvidia GPU, which we have not announced yet, to enable breakthrough research on a wide range of fields,” Kharya said. Meanwhile, Nvidia also said that it would make its graphics chips available with Amazon Web Services’ Graviton2 ARM-based CPU for datacenters for cloud computing. With Grace, Nvidia will embark on a multiyear pattern of creating graphics processing units, CPUs, and data processing units (DPUs), and it will alternate between Arm and x86 architecture designs, Huang said.
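The performance comparisons quoted in this article can be sanity-checked with simple arithmetic. This is an illustrative sketch using only the figures reported above, not Nvidia's own numbers beyond those quoted:

```python
# Sanity-checking the figures quoted above.
alps_exaflops, selene_exaflops = 20.0, 2.8
# ~7.1x, matching the article's "about 7 times" claim
print(f"Alps vs. Selene: {alps_exaflops / selene_exaflops:.1f}x")

# NVLink: 900 GB/s per Grace-GPU connection, said to give 30x the
# aggregate bandwidth of today's leading servers -- implying a
# baseline of roughly 30 GB/s per equivalent link.
nvlink_gbps = 900
print(f"Implied baseline: {nvlink_gbps / 30:.0f} GB/s")
```

The implied ~30 GB/s baseline is in the range of a PCIe-attached GPU link, which is consistent with Moorhead's point that models of this scale are "hard to address" with GPUs connected over PCIe.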
"
14,526
2,019
"Microsoft is banking Cortana's success on the idea of a multi-assistant world | VentureBeat"
"https://venturebeat.com/ai/microsoft-is-banking-cortanas-success-on-the-idea-of-a-multi-assistant-world"
"Microsoft is banking Cortana’s success on the idea of a multi-assistant world. Microsoft general manager Megan Saunders and Amazon VP Tom Taylor onstage at Build 2018 after a demo of Cortana and Alexa working together. In the competitive landscape of virtual assistants, Cortana has struggled to find its place. It lags behind competitors like Google Assistant, Alexa, and Siri in the delivery of satisfactory responses to questions , and with no smart speaker or mobile operating system, it lacks native access to two of the most common devices people use to speak with AI assistants. It may still have the power to act as a general purpose assistant, but Microsoft wants Cortana to become your assistant at work.
The focus is on making Cortana a larger part of Microsoft 365 productivity software for the workplace, whose applications, including Outlook, Word, and PowerPoint, are currently used by more than 180 million monthly active users. “We are really focusing on this experience, embedding [Cortana] across M365. That’s really the message,” said Microsoft corporate VP Andrew Shuman. That strategy played out across multiple upgrades and announcements that Microsoft made Monday at its Ignite 2019 conference. Cortana can now read your email summaries and send quick-reply responses in Outlook. The AI assistant is also getting into the dicey business of scheduling meetings, as well as delivering daily schedules and task rundowns. Excel now supports natural language queries, so you can ask questions about your Excel data, and you can use Cortana as a kind of coach. Email briefings from Cortana in Outlook can suggest focus time, and last month Microsoft launched Presenter Coach , a PowerPoint service that listens to your presentations and then provides feedback on pace, use of inclusive language, and repetitive use of mannerisms like “umm” and “basically.” As other assistants focus on consumer use cases, Cortana is now able to transcribe your meetings, perform voice email playback, find and automatically remind you about tasks in your emails, and schedule your meetings. Cortana has also entered Microsoft Teams and Skype in recent years. But absent any first-party hardware or a mobile operating system of its own, outside of Windows 10 Cortana is going to have to live or die in a multi-assistant capacity, alongside the virtual assistants that are its direct competitors. The future of Cortana hardware? The double-down on integration with Microsoft software doesn’t address the fact that the company missed its chance on hardware.
There’s no first-party Microsoft smart speaker with Cortana, like Google Nest or Amazon Echo, for example. The Harman Kardon Invoke , one of the only speakers with Cortana inside, saw little commercial success. Smart home integrations and Cortana skills don’t seem to see much adoption either. That leaves Microsoft’s intentions for future hardware difficult to follow. In a conversation with VentureBeat last year, former Cortana product lead Javier Soltero stressed that Cortana adoption rates may be partially based upon Cortana’s ability to grow a presence in the home. On the other hand, Microsoft CTO Kevin Scott told VentureBeat that Cortana doesn’t need a smart speaker to succeed as an AI assistant and can instead rely on legacy strongholds like Windows 10. Microsoft’s Surface Headphones, released a year ago, and the recently released Surface Earbuds are devices marketed to busy professionals, not the home in particular. In an interview ahead of Monday’s news, Shuman talked to VentureBeat about Cortana’s focus on enterprise applications, Cortana’s future in hardware, and the new Voice Interoperability Initiative to make a multi-assistant world. “I echo Kevin’s point about ambient devices,” he said. “I think we’re going to continue to work on our Amazon partnership and thinking a lot about how M365 users who have an Amazon speaker can get a great experience, as we have that in beta already today and we’ll do more there,” Shuman said. When asked about the less-than-clear hardware message Cortana can present, Shuman said Microsoft will look for opportunities with mobile devices. “We feel ever more convicted that getting ourselves into a great position on the [mobile] device you already have, you already trust with a lot of your data, but really being able to enhance that experience, because it is hard to do some stuff on the phone. That will be the way forward for us,” Shuman said.
In news that may run counter to the idea of a mobile strategy for Cortana, word emerged today that the AI assistant may be removed from the Microsoft Launcher for Android smartphones. VentureBeat reached out to Microsoft for comment. This story will be updated if we hear back. Exactly how to define success for an AI assistant can depend heavily on existing market advantages. Cortana’s current focus is on leveraging mobile and PC software use cases. Meanwhile, Samsung’s Bixby might be losing a dedicated button on smartphones, and its Galaxy Home may never be a hit, but it could still succeed as a second-class assistant with third-party offerings and integrations with popular home appliances. Microsoft would benefit from a multi-assistant world — and indeed it has to, because it lacks the competitive advantage of a popular mobile operating system or smart devices, like speakers or displays, that reside in people’s homes. How to define AI assistant success in a multi-assistant world In September, ahead of Amazon’s major annual Alexa hardware event, the Voice Interoperability Initiative launched with 30 partners, including Amazon and Microsoft, as well as Baidu, Tencent, Intel, Qualcomm, and Salesforce. Notably missing from the group are Apple and Google, makers of mobile operating systems that are most likely to be used for interaction with an AI assistant. “I think that’s why I’m most excited about that consortium, is to recognize that speech and natural language breaks down barriers between these experiences […] much more than any app platform ever has, because you’re not going to just think, ‘Oh, I’m only doing calendaring right now; I’m only going to talk about calendaring.’ No, you’re going to say … ‘Is there a cafe near that appointment I’m having?’,” Shuman said.
“Those things are just going to become more and more part and parcel of this with much fuzzier lines, and that’s where I think that that should go.” Microsoft has been pushing the notion of a multi-assistant world since it shared plans with Amazon to make Cortana available through Echo speakers and make Alexa available through Windows 10 PCs. Amazon’s Echo is the most popular line of smart speakers in the United States. Shuman added, “In my life, I have maybe a doctor or a coach or a therapist, I have multiple assistants who are helping me all the time. You’re going to have multiple digital assistants, and they’re going to do different things for the right places. I mean, we’re never going to be an ecommerce company, so it’s really great that we can think about how Amazon can extend our experiences in the right ways.” Microsoft CEO Satya Nadella expressed a desire to connect Cortana with Google Assistant earlier this year, too. There’s still much to learn about people’s habits with a voice ecosystem. Canalys predicted the sale of more than 200 million smart speakers worldwide by the end of 2019, up from 114 million in 2018. Voice assistant usage is up, but it’s unclear whether people are actually open to the idea of using more than one assistant in their lives. A multi-assistant plan might work, but Microsoft needs it to if Cortana is to succeed going forward.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,527
2,023
"CoreWeave came 'out of nowhere.' Now it's poised to make billions off AI with its GPU cloud | VentureBeat"
"https://venturebeat.com/ai/coreweave-came-out-of-nowhere-now-its-poised-to-make-billions-off-of-ai-with-its-gpu-cloud"
"CoreWeave came ‘out of nowhere.’ Now it’s poised to make billions off AI with its GPU cloud Image: CanvaPro/VentureBeat A few months ago, few had heard of CoreWeave, a cloud startup specializing in GPU-accelerated workloads, according to Brannin McBee, the company’s co-founder and chief strategy officer. Now, CoreWeave is poised to make billions off the generative AI boom with its GPU cloud. “We’ve come out of nowhere,” he told VentureBeat in an interview last week. With over $400 million in new funding; a new $1.6 billion data center in Plano, Texas; and the world’s fastest AI supercomputer built in partnership with Nvidia unveiled last month, the company’s fortunes have shifted dramatically — thanks, in no small part, to Nvidia. 
CoreWeave: from crypto mining to GPU acceleration at scale CoreWeave was founded in 2017 by three commodities traders who turned their cryptocurrency mining hobby into an Ethereum mining company, using GPUs to verify blockchain transactions — doing business out of a New Jersey data center. By 2019, the founders had pivoted — fortuitously, in hindsight — to building a specialized cloud infrastructure spanning seven facilities that offered GPU acceleration at scale. Suddenly, everyone was talking about CoreWeave, which led to an investment “tide shift” in March of this year, said McBee. “People were still able to access GPUs last year, but when it became extremely tight, all of a sudden it was like, where do we get these things?” he explained. AI companies that were using CoreWeave spread the word to VCs, he added, who suddenly saw a gold mine: “They said, ‘Why aren’t we speaking to these guys’?” That led to a massive $221 million Series B funding round in April, which included an investment from Nvidia, and one month later, CoreWeave secured another $200 million. McBee said CoreWeave did $30 million in revenue last year, will score $500 million this year and has nearly $2 billion already contracted for next year. CNBC reported in June that Microsoft “has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave.” “It’s happening very, very quickly,” he said. “We have a massive backlog of client demand we’re trying to build for. We’re also building at 12 different data centers right now. 
I’m engaged in something like one of the largest builds of this infrastructure on the planet today, at a company that you had never heard of three months ago.” Nvidia has diverted its latest AI server chips to CoreWeave In addition to being in the right place with the right technology at the right time, CoreWeave has also benefitted significantly from Nvidia’s strategy to stay dominant in the AI space. Nvidia has steered a generous number of its latest AI server chips to CoreWeave and away from top cloud providers like AWS, even though supply is tight. That’s because those companies are developing their own AI chips in an attempt to reduce their reliance on Nvidia. “It certainly isn’t a disadvantage to not be building our own chips,” McBee admitted. “I would imagine that that certainly helps us in our constant effort to get more GPUs from Nvidia at the expense of our peers.” But while having Nvidia in their corner is “excellent” for CoreWeave, ultimately, McBee said, over time there will be a matrix of different pieces and types of infrastructure that support different types of AI models. However, he believes GPUs will remain the infrastructure that supports the most cutting-edge, most compute-intensive models that get developed — and that it will take at least another two years, if not three, for the GPU supply shortage to begin to alleviate. CoreWeave clients include Inflection AI For now, top AI companies like Inflection AI, which recently announced an eye-popping $1.3 billion funding round to build a massive GPU cluster, are using CoreWeave to build it. “They called us and said, ‘Guys, we need you to build one of the most high-performance supercomputers on the planet to support our AI company,'” McBee said. “They call us and they say, ‘This is what we’re looking for, can you do it?’ — and I think we have two of the top five supercomputers that we are building right now in terms of FLOPS. 
” For a client like Inflection, he explained, CoreWeave comes up with a timeline for the large build, and then explains to Nvidia what they are doing. “They say, ‘We’ll support you with engineering, marketing, infrastructure, allocation, whatever you need to get this done,'” he said. “Then we go execute, and that’s also where our background comes in — we’ve consistently executed and we’ve built a differentiated product in the market.” The future looks as bright as the GPU gold rush for CoreWeave Now that it has become a household name in AI, CoreWeave is making plans to establish itself in industries such as life sciences — for areas like drug discovery, protein folding simulations, molecular discovery and genetics testing. “That all requires the type of compute that we’re operating and they need to be able to access it in massive scale,” he explained. But at the moment, McBee sounds a bit like a kid in a GPU candy store when it comes to CoreWeave’s efforts to power the generative AI boom. “Walking into the data centers is the most amazing thing,” he said. “I’ve been building computers since I was a kid. So it’s just fun to be able to be in a business where you’re actually able to do this and scale it and power some of the coolest companies on the planet right now.” "
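The revenue trajectory McBee cites ($30 million last year, $500 million expected this year, nearly $2 billion contracted for next year) implies growth multiples that are easy to sanity-check. A quick sketch; the calendar-year labels are inferred from the article's 2023 publication date, not stated explicitly:

```python
# Growth multiples implied by the revenue figures McBee cites
# ($30M last year, $500M this year, ~$2B contracted for next year).
# Year labels are an inference from the article's 2023 publication date.
revenue_musd = {"2022": 30, "2023": 500, "2024": 2_000}

years = sorted(revenue_musd)
multiples = {
    f"{a}->{b}": revenue_musd[b] / revenue_musd[a]
    for a, b in zip(years, years[1:])
}
print(multiples)  # roughly 16.7x year over year, then 4.0x
```

That is roughly a 16.7× jump followed by a 4× jump, which is what "it's happening very, very quickly" looks like in numbers.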
14,528
2,023
"To combat GPU shortage for generative AI, startup works to optimize hardware | VentureBeat"
"https://venturebeat.com/ai/to-combat-gpu-shortage-for-generative-ai-startup-works-to-optimize-hardware"
"To combat GPU shortage for generative AI, startup works to optimize hardware AI startup CentML, which optimizes machine learning models to work faster and lower compute costs, emerged from stealth today. The Toronto-based company aims to help address the worldwide shortage of GPUs needed for training and inference of generative AI models. According to the company, access to compute is one of the biggest obstacles to AI development, and the scarcity is only going to increase as inference workloads accelerate. By extending the yield of the current AI chip supply and legacy inventory without affecting accuracy, CentML says it can increase access to compute in what it calls a “broken” marketplace for GPUs. Hard for smaller companies to access GPUs CentML raised a $3.5 million seed round in 2022 led by AI-focused Radical Ventures. 
Cofounder and CEO Gennady Pekhimenko, a leading systems architect, told VentureBeat in an interview that when he saw the trajectory of the size of large language models, it was clear that whoever owned the hardware and the software stack on top of them would have a dominant position. “It was very transparent what was happening,” he said, adding with a laugh that even he put his money into Nvidia, which controls about 80% of the GPU market. But Nvidia, he explained, always wants to sell its most expensive chips, like the latest A100 and H100 GPUs, and that has made it hard for smaller companies to get access. Yet Nvidia has other, less expensive chips that are poorly utilized: “We build software that optimizes those models efficiently on all the GPUs available, not just on the most expensive available in the cloud,” he said. “We’re essentially serving a larger part of the market.” As the cost of inference grows “exponentially” (models like ChatGPT cost millions of dollars to run), CentML uses a powerful open-source compiler to automatically tune optimizations to work best for a company’s specific inference pipeline and hardware. A competitor like OctoML, Pekhimenko said, is also built on compiler technology to automatically maximize model performance, but on an older technology. “Their solution is not competitive in the cloud. We knew what the deficiencies were and built a new technology that doesn’t have those deficiencies,” he said. “So we have the benefit of coming second.” Race to access AI chips has become like Game of Thrones David Katz, partner at Radical Ventures, says the battle to get access to AI chips has become like Game of Thrones — though less gory. 
“There’s this insatiable appetite for compute that’s required in order to run these models and large models,” he told VentureBeat, adding that Radical invested in CentML last year. CentML’s offering, he said, creates “a little bit more efficiency” in the market. In addition, it demonstrates that complex, billion-plus-parameter models can also run on legacy hardware. “So you don’t need the same volume of GPUs or you don’t need the A100s necessarily,” he said. “From that perspective, it is essentially increasing the capacity or the supply of chips in the market.” "
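The article doesn't detail how CentML's compiler works, but one classic optimization that ML compilers apply, operator fusion, gives a feel for how the same model can run faster on the same chip. A toy sketch in pure Python, purely illustrative and unrelated to CentML's actual stack:

```python
# Toy illustration of operator fusion, one optimization an ML compiler
# can apply: y = relu(a*x + b) computed as three separate array passes
# versus one fused pass. The results are identical; the fused version
# touches memory once instead of three times and allocates no temporaries,
# which is where the speedup comes from on real hardware.
def unfused(xs, a, b):
    t1 = [a * x for x in xs]          # pass 1: multiply
    t2 = [v + b for v in t1]          # pass 2: add (new temporary list)
    return [max(v, 0.0) for v in t2]  # pass 3: relu (another temporary)

def fused(xs, a, b):
    return [max(a * x + b, 0.0) for x in xs]  # single pass, no temporaries

xs = [-2.0, -0.5, 0.0, 1.5, 3.0]
assert unfused(xs, 2.0, 1.0) == fused(xs, 2.0, 1.0)  # same answer, less work
```

Production compilers make this kind of transformation (and many others, such as layout and kernel selection) automatically for a given model and GPU, which is the "extend the yield of the current chip supply" idea in compiler terms.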
14,529
2,021
"GPT-3 comes to the enterprise with Microsoft's Azure OpenAI Service | VentureBeat"
"https://venturebeat.com/ai/gpt-3-comes-to-the-enterprise-with-microsofts-azure-openai-service"
"GPT-3 comes to the enterprise with Microsoft’s Azure OpenAI Service Azure OpenAI Service uses GPT-3 to convert transcripts of live television commentary during a women’s basketball game into short game summaries. During its Ignite conference this week, Microsoft unveiled the Azure OpenAI Service, a new offering designed to give enterprises access to OpenAI’s GPT-3 language model and its derivatives along with security, compliance, governance, and other business-focused features. Initially invite-only as a part of Azure Cognitive Services, the service will allow access to OpenAI’s API through the Azure platform for use cases like language translation, code generation, and text autocompletion. 
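For illustration, a text-autocompletion request to a GPT-3 deployment behind Azure might be assembled roughly as follows. The endpoint shape, header name, deployment name, and parameters here are assumptions made for the sketch, not details from the article:

```python
import json

# Hypothetical sketch of a completion request to an Azure-hosted GPT-3
# deployment. The URL pattern, "api-key" header, and body fields are
# illustrative assumptions; consult the actual service documentation
# before relying on any of them.
def build_completion_request(resource, deployment, prompt, max_tokens=50):
    url = (
        f"https://{resource}.openai.azure.com/openai/"
        f"deployments/{deployment}/completions"
    )
    headers = {"api-key": "<YOUR-KEY>", "Content-Type": "application/json"}
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens})
    return url, headers, body

url, headers, body = build_completion_request(
    "contoso", "gpt3-davinci", "Translate to French: Hello"
)
# The request would then be sent with any HTTP client,
# e.g. requests.post(url, headers=headers, data=body).
```

The per-resource hostname is the relevant enterprise detail: it is what lets a company pin the model to a region and route traffic over its own network, per Montgomery's comments below.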
According to Microsoft corporate VP for Azure AI Eric Boyd, companies can leverage the Azure OpenAI Service for marketing purposes, like helping teams brainstorm ideas for social media posts or blogs. They could also use it to summarize common complaints in customer service logs or assist developers with coding by minimizing the need to stop and search for examples. “We are just in the beginning stages of figuring out what the power and potential of GPT-3 is, which is what makes it so interesting,” he added in a statement. “Now we are taking what OpenAI has released and making it available with all the enterprise promises that businesses need to move into production.” Large language models Built by OpenAI, GPT-3 and its fine-tuned derivatives, like Codex, can be customized to handle applications that require a deep understanding of language, from converting natural language into software code to summarizing large amounts of text and generating answers to questions. People have used it to automatically write emails and articles, compose poetry and recipes, create website layouts, and create code for deep learning in a dozen programming languages. GPT-3 has been publicly available since 2020 through the OpenAI API; OpenAI has said that GPT-3 is now being used in more than 300 different apps by “tens of thousands” of developers and producing 4.5 billion words per day. But according to Microsoft corporate VP of AI platform John Montgomery, who spoke recently with VentureBeat in an interview, the Azure OpenAI Service enables companies to deploy GPT-3 in a way that complies with the laws, regulations, and technical requirements (for example, scaling capacity, private networking, and access management) unique to their business or industry. 
“When you’re operating a national company, sometimes, your data can’t [be used] in a particular geographic region, for example. The Azure OpenAI Service can basically put the model in the region that you need for you,” Montgomery said. “For [our business customers,] it comes down to questions like, ‘How do you handle our security requirements?’ and ‘How do you handle things like virtual networks?’ Some of them need all of their API endpoints to be centrally managed or use customer-supplied keys for encryption … What the Azure OpenAI Service does is it folds all of these Azure backplane capabilities [for] large enterprise customers [into a] true production deployment to open the GPT-3 technology.” Montgomery also points out that the Azure OpenAI Service makes billing more convenient by charging for model usage under a single Azure bill, versus separately under the OpenAI API. “That makes it a bit simpler for customers to pay and consume,” he said. “Because at this point, it’s one Azure bill.” Enterprises are indeed increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language. According to a 2021 survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. Customization and safety As with the OpenAI API, the Azure OpenAI Service will allow customers to tune GPT-3 to meet specific business needs using examples from their own data. It’ll also provide “direct access” to GPT-3 in a format designed to be intuitive for developers to use, yet robust enough for data scientists to work with the model as they wish, Boyd says. “It really is a new paradigm where this very large model is now itself the platform. 
So companies can just use it and give it a couple of examples and get the results they need without needing a whole data science team and thousands of GPUs and all the resources to train the model,” he said. “I think that’s why we see the huge amount of interest around businesses wanting to use GPT-3 — it’s both very powerful and very simple.” Of course, it’s well-established that models like GPT-3 are far from technically perfect. GPT-3 was trained on more than 600GB of text from the web, a portion of which came from communities with pervasive gender, race, physical, and religious prejudices. Studies show that it, like other large language models, amplifies the biases in data on which it was trained. In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism claimed that GPT-3 can generate “informational” and “influential” text that might radicalize people into far-right extremist ideologies and behaviors. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of bias from some of the most popular open source models, such as Google’s BERT and XLNet and Facebook’s RoBERTa. Even fine-tuned models struggle to shed prejudice and other potentially harmful characteristics. For example, Codex can be prompted to generate racist and otherwise objectionable outputs as executable code. When writing code comments with the prompt “Islam,” Codex outputs the words “terrorist” and “violent” at a greater rate than with other religious groups. More recent research suggests that toxic language models deployed into production might struggle to understand aspects of minority languages and dialects. 
This could force people using the models to switch to “white-aligned English” to ensure the models work better for them, or discourage minority speakers from engaging with the models at all. OpenAI claims to have developed techniques to mitigate bias and toxicity in GPT-3 and its derivatives, including code review, documentation, user interface design, content controls, and toxicity filters. And Microsoft says it will only make the Azure OpenAI Service available to companies who plan to implement “well-defined” use cases that incorporate its responsible principles and strategies for AI technologies. Beyond this, Microsoft will deliver safety monitoring and analysis to identify possible cases of abuse or misuse as well as new tools to filter and moderate content. Customers will be able to customize those filters according to their business needs, Boyd says, while receiving guidance from Microsoft on using the Azure OpenAI Service “successfully and fairly.” “This is a really critical area for AI generally and with GPT-3 pushing the boundaries of what’s possible with AI, we need to make sure we’re right there on the forefront to make sure we are using it responsibly,” Boyd said. “We expect to learn with our customers, and we expect the responsible AI areas to be places where we learn what things need more polish.” OpenAI and Microsoft OpenAI’s deepening partnership with Microsoft reflects the economic realities that the company faces. It’s an open secret that AI is a capital-intensive field — in 2019, OpenAI became a for-profit company called OpenAI LP to secure additional funding while staying controlled by a nonprofit, having previously been a 501(c)(3) organization. And in July, OpenAI disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik’s Cube. 
Roughly a year ago, Microsoft announced it would invest $1 billion in San Francisco-based OpenAI to jointly develop new technologies for Microsoft’s Azure cloud platform. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would then package and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-generation computing hardware. In the months that followed, OpenAI released a Microsoft Azure-powered API — OpenAI API — that allows developers to explore GPT-3’s capabilities. In May during its Build 2020 developer conference, Microsoft unveiled what it calls the AI Supercomputer, an Azure-hosted machine co-designed by OpenAI that contains over 285,000 processor cores and 10,000 graphics cards. And toward the end of 2020, Microsoft announced that it would exclusively license GPT-3 to develop and deliver AI solutions for customers, as well as create new products that harness the power of natural language generation, like Codex. Microsoft last year announced that GPT-3 would be integrated “deeply” with Power Apps, its low-code app development platform — specifically for formula generation. The AI-powered features will allow a user building an ecommerce app, for example, to describe a programming goal using conversational language like “find products where the name starts with ‘kids.'” More recently, Microsoft-owned GitHub launched a feature called Copilot that’s powered by OpenAI’s Codex code generation model, which GitHub says is now being used to write as much as 30% of new code on its network. Certainly, the big winners in the NLP boom are cloud service providers like Azure. According to the John Snow Labs survey, 83% of companies already use NLP APIs from Google Cloud, Amazon Web Services, Azure, and IBM in addition to open source libraries. 
This represents a sizeable chunk of change, considering the fact that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue alone from its AI software platforms. "
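Boyd's point above about giving the model "a couple of examples" refers to few-shot prompting, where labeled examples are packed into the prompt itself so a large model can pick up a task without fine-tuning or a data science team. A minimal sketch; the prompt format below is illustrative, not a documented service interface:

```python
# Few-shot prompting sketch: the task instruction plus a couple of
# labeled examples are concatenated into a single prompt, and the model
# is asked to continue the pattern for a new input. The "Input:/Output:"
# layout is an illustrative convention, not a required format.
def few_shot_prompt(instruction, examples, query):
    lines = [instruction]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")  # model completes from here
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    "Summarize the customer complaint in five words or fewer.",
    [
        ("My package arrived two weeks late and the box was crushed.",
         "Late delivery, damaged packaging."),
        ("I was charged twice for the same order.",
         "Duplicate billing charge."),
    ],
    "The app logs me out every time I switch screens.",
)
print(prompt)
```

This is the customer-service-log summarization use case from the article reduced to a prompt-construction exercise: two examples establish the pattern, and the trailing "Output:" invites the model to complete the third.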
14,530
2,019
"Microsoft invests $1 billion in OpenAI to develop AI technologies on Azure | VentureBeat"
"https://venturebeat.com/ai/microsoft-invests-1-billion-in-openai-to-develop-ai-technologies-on-azure"
"Microsoft invests $1 billion in OpenAI to develop AI technologies on Azure From left to right: Former CTO Greg Brockman and chief scientist Ilya Sutskever, speaking at VB Transform in 2019 Microsoft today announced that it would invest $1 billion in OpenAI, the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. In a blog post, Brockman said the investment will support the development of artificial general intelligence (AGI) — AI with the capacity to learn any intellectual task that a human can — with “widely distributed” economic benefits. 
To this end, OpenAI intends to partner with Microsoft to jointly develop new AI technologies for the Seattle company’s Azure cloud platform and will enter into an exclusivity agreement with Microsoft to “further extend” large-scale AI capabilities that “deliver on the promise of AGI.” Additionally, OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners, and OpenAI will train and run AI models on Azure as it works to develop new supercomputing hardware while “adhering to principles on ethics and trust.” “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges,” said Microsoft CEO Satya Nadella. “By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratize AI — while always keeping AI safety front and center — so everyone can benefit.” According to Brockman, the partnership was motivated in part by OpenAI’s continued pursuit of enormous computational power. Its researchers recently released analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew by more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore’s Law. Perhaps exemplifying the trend is OpenAI’s OpenAI Five, an AI system that squared off against professional players of the video game Dota 2 last summer. On Google’s Cloud Platform — in the course of training — it played 180 years’ worth of games every day on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores, up from 60,000 cores just a few years ago. “OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital,” Brockman said. 
“The most obvious way to cover costs is to build a product, but that would mean changing our focus.” OpenAI publishes studies in AI subfields from computer vision to natural language processing (NLP), with the stated mission of safely creating superintelligent software. The startup — which began in 2015 as a nonprofit but later restructured as a capped-profit company under OpenAI LP, an investment vehicle — last year detailed an AI robotics system with human-like dexterity. Its Dota 2 bot defeated 99.4% of players in public matches and a team of professional players twice, and its most sophisticated NLP model can generate convincingly humanlike short stories and Amazon reviews from whole cloth. Beyond its flashier projects, OpenAI has contributed to open source tools like Gym, a toolkit for testing and comparing reinforcement learning algorithms that learn to achieve goals from trial and error, and Neural MMO, a “massively multi-agent” virtual training ground that plops agents in the middle of an RPG-like world. Other recent public work includes CoinRun, which tests the adaptability of reinforcement learning agents; Spinning Up, a program designed to teach anyone deep learning; Sparse Transformers, which can predict what comes next in lengthy text, image, and audio sequences; and MuseNet, which generates novel four-minute songs with 10 different instruments across a range of genres and styles. OpenAI is in many ways the stateside counterpart of U.K.-based DeepMind, which Google parent company Alphabet acquired in 2014 for £400 million ($500 million). Since its founding in 2010, DeepMind has — like OpenAI — leaned heavily on computation-heavy techniques to achieve remarkable AI gains in gaming, media synthesis, and medicine. The advancements haven’t come cheap — Wired reports that in 2017 DeepMind burned through £334 million ($442 million). 
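As a back-of-the-envelope check, the two figures from OpenAI's compute analysis cited above (growth of more than 300,000 times and a 3.5-month doubling time) are mutually consistent with the 2012–2018 window:

```python
import math

# Sanity check on OpenAI's compute-growth figures: at one doubling every
# 3.5 months, how long does a >300,000x increase take? A 300,000x increase
# is log2(300,000) doublings.
doublings = math.log2(300_000)   # ~18.2 doublings
months = doublings * 3.5         # ~63.7 months
print(f"{doublings:.1f} doublings -> {months / 12:.1f} years")
```

That works out to roughly 5.3 years of doubling every 3.5 months, which fits comfortably inside the six-year 2012–2018 span the researchers cite.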
For its part, OpenAI previously secured a $1 billion endowment from its founding members and investors, and OpenAI LP has so far attracted funds from Hoffman’s charitable foundation and Khosla Ventures. The company spent $11.2 million in 2016, according to its most recently available IRS filing. Brockman and CEO Altman believe that true AGI will be able to master more fields than any one person, chiefly by identifying complex cross-disciplinary connections that elude human experts. Furthermore, they predict that responsibly deployed AGI — in other words, AGI deployed in “close collaboration” with researchers in relevant fields, like social science — might help solve longstanding challenges in climate change, health care, and education. “The creation of [AGI] will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Altman. “Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI. We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed.” As for Microsoft, it’s yet another notch in an AI toolbelt comprising everything from research grants and solutions suites like Windows Vision Skills to machine learning-powered productivity features in Office 365. On the product side, the company recently rolled out enhancements to Azure Cognitive Services, a prebuilt service designed to expedite no-code AI model creation, and Azure Machine Learning, a cloud-hosted toolset that facilitates the development of predictive models, classifiers, and recommender systems. Additionally, it launched in preview a software kit for robotics and autonomous physical systems development, and it open-sourced a tool that enables developers to imbue AI systems with explainable components. 
These updates followed on the heels of high-profile AI collaborations with AT&T , Adobe , and others. Last July, Microsoft said it would team up with Walmart to expedite the retailer’s digital transformation via a combination of AI, cloud, and internet of things (IoT) services, principally by supplying the necessary infrastructure via Azure and applying machine learning services to tasks like routing delivery trucks. Concurrently, the company accelerated its investments in both late-stage and relatively nascent AI startups, contributing to an estimated 72% industry-wide year-over-year uptick in AI and machine learning funding. In June, Microsoft acquired Berkeley, California-based startup Bonsai , which designs deep learning tools aimed at the enterprise. And in November it purchased XOXCO, maker of the Botkit framework that creates conversational bots for team communications chat apps like Slack and Microsoft Teams, months after snatching up Lobe , creator of a platform for building custom deep learning models using a visual interface. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,531
2,023
"Google invests $300 million in Anthropic as race to compete with ChatGPT heats up | VentureBeat"
"https://venturebeat.com/ai/google-invests-300-million-in-anthropic-as-race-to-compete-with-chatgpt-heats-up"
"Google invests $300 million in Anthropic as race to compete with ChatGPT heats up According to new reporting from the Financial Times, Google has invested $300 million in one of the buzziest OpenAI rivals, Anthropic , whose recently debuted generative AI model Claude is considered competitive with ChatGPT. Google will reportedly take a stake of around 10%, and the new funding will value the San Francisco-based company at around $5 billion. The news comes little over a week after Microsoft announced a reported $10 billion investment in OpenAI, and signals an increasingly competitive Big Tech race in the generative AI space. 
Anthropic founded by OpenAI researchers Anthropic was founded in 2021 by several researchers who left OpenAI, and gained more attention last April when, after less than a year in existence, it suddenly announced a whopping $580 million in funding. Most of that money, it turns out, came from Sam Bankman-Fried and the folks at FTX, the now-bankrupt cryptocurrency platform accused of fraud. There have been questions as to whether that money could be recovered by a bankruptcy court. Anthropic — and FTX — have also been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru called out recently in a Wired opinion piece as a “dangerous brand of AI safety.” Google will have access to Claude Anthropic’s AI chatbot, Claude — currently available in closed beta through a Slack integration — is reportedly similar to ChatGPT and has even demonstrated improvements. Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence and autonomy. According to an Anthropic paper detailing Constitutional AI , the process involves a supervised learning and a reinforcement learning phase: “As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.” 
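The supervised phase described in the paper has the model draft a response, critique its own draft against a written principle, and revise it, with the (prompt, revision) pairs then used for fine-tuning. A schematic sketch of that loop, with stand-in stub functions where the real method calls a large language model:

```python
# Schematic sketch of Constitutional AI's supervised phase. The three
# model calls below are stand-in stubs — in the actual method each step
# is performed by a large language model.

PRINCIPLE = "Choose the response that is least harmful."

def generate(prompt):
    # Stub: a raw model draft, which may contain a harmful answer.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stub: the model critiques its own draft against a principle.
    return f"Critique of '{response}' under: {principle}"

def revise(response, critique_text):
    # Stub: the model rewrites the draft to address the critique.
    return f"Revised per critique: {response}"

def supervised_phase(prompts):
    """Collect (prompt, revised response) pairs for fine-tuning."""
    pairs = []
    for p in prompts:
        draft = generate(p)
        c = critique(draft, PRINCIPLE)
        pairs.append((p, revise(draft, c)))
    return pairs

pairs = supervised_phase(["How do I pick a lock?"])
```

The reinforcement learning phase then trains a preference model on AI-generated comparisons of such revisions, rather than on human feedback.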
"
14,532
2,023
"Deci roars into action, releasing hyper-efficient AI models for text and image generation | VentureBeat"
"https://venturebeat.com/business/deci-roars-into-action-releasing-hyper-efficient-ai-models-for-text-and-image-generation"
"Deci roars into action, releasing hyper-efficient AI models for text and image generation Credit: VentureBeat made with Midjourney The market of foundational generative AI models — those that are powerful and capable enough to serve a broad swath of use cases, from coding to content generation — is getting more crowded by the day. But Israeli startup Deci is hoping to make a splash in the industry by targeting one very specific and difficult goal: efficiency. Today, the four-year-old company delivered a flurry of blows toward its competitors, launching a duo of open-source foundation models — DeciDiffusion 1.0 , a text-to-image generator, and DeciLM 6B , a text-to-text generator — as well as a software development kit (SDK) called Infery LLM, which will allow developers to build applications atop the models, all of which are intended for commercial and research purposes. You can demo DeciDiffusion 1.0 and a lite version of the DeciLM 6B on HuggingFace. 
Efficiency gains and cost savings Importantly: Deci’s entire mission is achieving new standards of efficiency and speed for generative AI inference — the actual user-facing models. The company notes that DeciDiffusion is three times faster than direct competitor model Stable Diffusion 1.5, while DeciLM 6B is 15 times faster than Meta’s LLaMA 2 7B. “By using Deci’s open-source generative models and Infery LLM, AI teams can reduce their inference compute costs by up to 80% and use widely available and cost-friendly GPUs such as the NVIDIA A10 while also improving the quality of their offering,” reads the company’s press release. With many in Silicon Valley discussing the apparent shortage of suitable graphics processing units (mostly from market leader Nvidia ) for training and deploying AI models, Deci’s move to offer more power- and cost-efficient models — a pair of them — and an SDK appears to be excellent timing. Deci highlights cost savings in its blog post on DeciDiffusion, writing that it “boasts an impressive reduction of nearly 66% in production costs,” compared to Stable Diffusion 1.5, as well as “costing 70% less than Stable Diffusion for every 10,000 images generated.” Attacking the competition by rebuilding it with AutoNAC Deci says it is able to achieve these impressive results through its proprietary Neural Architecture Search (AutoNAC) technology, which essentially analyzes an existing AI model and constructs an entirely new AI made up of small models “whose overall functionality closely approximates” the original model, according to a Deci whitepaper on the tech. “The AutoNAC pipeline takes as input a user-trained deep neural network, a dataset, and access to an inference platform,” the white paper states. 
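The claimed savings compound multiplicatively: a model that runs faster on a GPU that costs less per hour cuts cost per image on both axes. A back-of-the-envelope illustration — the throughput and hourly rates below are made-up placeholders, not Deci's or Nvidia's published figures:

```python
def cost_per_10k_images(images_per_second, gpu_hourly_rate_usd):
    """Cost to generate 10,000 images at a given throughput and GPU rate."""
    seconds_needed = 10_000 / images_per_second
    return gpu_hourly_rate_usd * seconds_needed / 3600

# Hypothetical numbers for illustration only.
baseline = cost_per_10k_images(images_per_second=1.0, gpu_hourly_rate_usd=2.0)
faster = cost_per_10k_images(images_per_second=3.0, gpu_hourly_rate_usd=1.0)

savings = 1 - faster / baseline  # fraction saved vs. the baseline setup
```

With these placeholder numbers, a 3x speedup combined with a GPU at half the hourly rate yields roughly an 83% cost reduction, which is how "3x faster" and "up to 80% cheaper" can both be true at once.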
“It then redesigns the user’s neural network to derive an optimized architecture whose latency is typically two to ten times better—without compromising accuracy.” In other words, Deci’s tech can look at whatever models your business or organization currently has deployed, and then completely redesign them to run far faster and more efficiently, vastly reducing the cloud server costs you would have incurred by running the original, larger model. In the case of DeciDiffusion and DeciLM 6B, the models were developed by training on Stable Diffusion 1.5 and Meta’s LLaMA 2 7B, respectively. Deci took advantage of both open source models, applied its own proprietary training architecture to them, and created new, faster, more efficient models that do the same things. Because Deci’s models are also open source, they are free to use, even for commercial purposes. So how does the company plan to monetize? It’s charging for the SDK, of course. “Infery-LLM SDK requires a subscription,” wrote a Deci spokesperson to VentureBeat via email. “Teams can use our open source models with any tool they want and enjoy better performance compared to other models. But to maximize the speed and efficiency to the fullest they can get access to Infery-LLM SDK to optimize and run the models in any environment they choose.” Model specifications DeciDiffusion 1.0 contains 820 million parameters, according to Deci’s blog post on the model. It “was trained from scratch on a 320 million-sample subset of the LAION dataset,” and “fine-tuned on a 2 million sample subset of the LAION-ART dataset,” and achieves quality comparable to Stable Diffusion 1.5 with 40% fewer iterations. 
When it comes to DeciLM 6B , the model includes 5.7 billion parameters, 32 layers, 32 heads, a 4,096-token sequence length, a hidden size of 4,096, and a variable Grouped-Query Attention (GQA) mechanism. It was trained on the SlimPajama dataset using Deci’s AutoNAC methodology, and then “finetuned on a subset of the OpenOrca dataset” to create an even faster, smaller, and more efficient model called DeciLM 6B-Instruct , designed for following short prompts. Both DeciLM 6B and DeciLM 6B-Instruct are available now from Deci. Both DeciDiffusion 1.0 and DeciLM 6B are “intended for commercial and research use in English and can be fine-tuned for use in other languages,” according to their HuggingFace documentation. VentureBeat’s initial test of the DeciDiffusion 1.0 demo produced mixed results: the model struggled, as does Stable Diffusion 1.5, with more complex prompts with multiple elements on the first try. Meanwhile, VentureBeat’s brief test of the DeciLM 6B-Instruct model on HuggingFace yielded more impressive results, delivering mostly accurate summaries of history and a legible cover letter, as seen in the screenshots below. Clearly, Deci hopes to make a compelling offering to enterprises considering open source LLMs and foundation models for their businesses, as well as to the research community, by building upon and advancing from current open source AI models. Whatever happens, it’s an exciting and fiercely competitive time in open source AI, and generative AI more broadly. Correction, Sept. 14, 5:43 pm ET: this article initially quoted the Deci blog stating DeciDiffusion 1.0 reduced costs by 200% instead of 66%. We received word from Deci on the error and update, and updated our piece accordingly. 
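Grouped-query attention, one of the spec-sheet items above, saves memory by having several query heads share a single key/value head, which shrinks the KV cache at inference time. A toy sketch of the head-to-group mapping — the head counts here (8 query heads, 2 KV groups) are illustrative, not DeciLM's actual layout:

```python
# Toy illustration of grouped-query attention (GQA): several query heads
# read from one shared key/value head. Head counts are illustrative only.

n_q_heads, n_kv_heads = 8, 2
group_size = n_q_heads // n_kv_heads  # query heads per shared KV head

def kv_head_for(q_head):
    """Map a query-head index to the KV head whose cache it reads."""
    return q_head // group_size

mapping = [kv_head_for(h) for h in range(n_q_heads)]
# Query heads 0-3 share KV head 0; heads 4-7 share KV head 1, so the
# KV cache holds n_kv_heads entries per layer instead of n_q_heads.
```

A "variable" GQA scheme, as the article describes, would let `n_kv_heads` differ from layer to layer rather than fixing one ratio model-wide.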
"
14,533
2,022
"Who owns DALL-E images? Legal AI experts weigh in | VentureBeat"
"https://venturebeat.com/ai/who-owns-dall-e-images-legal-ai-experts-weigh-in"
"Who owns DALL-E images? Legal AI experts weigh in When OpenAI announced expanded beta access to DALL-E in July, the company offered paid subscription users full usage rights to reprint, sell and merchandise the images they create with the powerful text-to-image generator. A week later, creative professionals across industries were already buzzing with questions. Topping the list: Who owns images put out by DALL-E, or for that matter, other AI-powered text-to-image generators, such as Google’s Imagen? The owner of the AI that trains the model? 
Or the human that prompts the AI with words like “red panda wearing a black leather jacket and riding a motorcycle, in watercolor-style?” In a statement to VentureBeat, an OpenAI spokesperson said, “OpenAI retains ownership of the original image primarily so that we can better enforce our content policy.” However, several creative professionals told VentureBeat they were concerned about the lack of clarity around image ownership from tools like DALL-E. Some who work for large agencies or brands said those issues might be too uncertain to warrant using the tools for high-profile client work. Bradford Newman, who leads the machine learning and AI practice of global law firm Baker McKenzie in its Palo Alto office, said the answer to the question “Who owns DALL-E images?” is far from clear. And, he emphasized, legal fallout is inevitable. “If DALL-E is adopted in the way I think [Open AI] envisions it, there’s going to be a lot of revenue generated by the use of the tool,” he said. “And when you have a lot of players in the market and issues at stake, you have a high chance of litigation.” Big stakes get litigated for case-specific answers Mark Davies, partner at Orrick, agreed there are many open legal questions when it comes to AI. “What happens in reality is when there are big stakes, you litigate it,” he said. “And then you get the answers in a case-specific way.” In the context of text-to-image generators and the resulting creations, the question is mostly about what’s “fair use,” he explained. Under U.S. copyright law, fair use is a “legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances.” In a technology context, the most recent, and biggest, case example was 2021’s Google LLC v. Oracle America, Inc. 
In a 6-2 decision, the Supreme Court held that Google’s use of Oracle’s code amounted to a fair use under United States copyright law. As a result, the Court did not consider the question as to whether the material copied was protected by copyright. One big lesson from that case was that these disputes will be decided by the courts, Davies emphasized. “The idea that we’re going to get some magical solution from a different place is just not how the legal system really works,” he said. However, he added, for an issue like that around DALL-E image ownership to move forward, it often needs two parties with a lot at stake, because litigation is so expensive. “It does take a core disagreement on something really important for these rules to develop,” he said. And it has happened in the past, he added, with advances as varied as Morse code, railroads, smartphones and the internet. “I think when you are living through technological change, it feels unique and special,” he said. “But the industrial revolution happened. It got sorted out.” Contradictory statements from OpenAI on DALL-E? Still, some experts say OpenAI’s statements about the use of DALL-E – that the company owns the images but users can commercialize them – are confusing and contradictory. Jim Flynn, a senior partner and managing director at Epstein Becker and Green, said they struck him as “a little give with one hand, take away with the other.” The thing is, both sides have fairly good claims and arguments, he pointed out. “Ultimately, I think the people who own this AI process make a fairly good claim that they would have some ownership rights,” he said. “This image was created by the simple input of some basic commands from a third party.” On the other hand, an argument could be made that the use of DALL-E is similar to using a digital camera, he added — an example where images are created but the camera manufacturers do not own the rights to user photos. 
In addition, if the technology companies that own text-to-image generators also own the image output, it would be “viscerally unsatisfactory” to many who believe that if they buy or license a process like DALL-E, they should own what they created — particularly if they paid for the right to use it in the exact same manner as the AI company promoted them to use it. “If I were representing one of the advertising agencies, or the clients of the advertising agencies, I wouldn’t advise them to use this software to create a campaign, because I do think the AI provider would [currently] have some claims to the intellectual property,” he said. “I’d be looking to negotiate something more definitive.” The future of DALL-E image ownership While there are arguments on both sides of the DALL-E ownership question, as well as many historical analogies, Flynn does not necessarily think the law needs to change to address them. “But will [the law] change? Yes, I think it will, because there are a lot of people, especially in the AI community, who have some interest that isn’t really related to copyright or intellectual property,” he said. “I think the interest in it isn’t being driven because of complex legal issues but to push the issue of AI as having the ability to create, to have a separate consciousness. Because so much else in our society finds its way to court to get determined, that’s why these cases are out there.” Flynn predicts a shakeout that leads to a new consensus around who owns AI-generated creations, one that will be driven by economic forces that the law follows. “That’s what happened with things like email correspondence and legal privilege, and frankly, that’s what happened with the digital camera,” he said. He added that he would tell clients that if they want to use AI-generated creations, it will be best to use a purveyor that is the equivalent of stock photo site Shutterstock, which offers a certain number of licenses for an annual fee. 
“But the reality is, you’re also going to get big advertising agencies that are probably going to either develop their own [text-to-image AI], or license AI at the institutional level from some API provider to create advertising,” he said. “And the ad agency will pay the AI creator some amount of money and use it for clients. There certainly are models out there that this fits with.” "
14,534
2,023
"Adobe integrates generative AI directly into Photoshop with new Firefly capabilities | VentureBeat"
"https://venturebeat.com/ai/adobe-integrates-generative-ai-directly-into-photoshop-with-new-firefly-capabilities"
"Adobe integrates generative AI directly into Photoshop with new Firefly capabilities For the first time, Adobe has integrated generative AI into its flagship product Photoshop. In a beta release, the company unveiled Generative Fill, bringing its Adobe Firefly generative AI capabilities directly into design workflows. The development continues Adobe’s broad effort to inject more artificial intelligence into its creative products, particularly with Firefly, a new family of creative generative AI models, which was introduced in March. In addition to generating images from text prompts, Generative Fill automatically matches the perspective, lighting and style of images, while users can add, extend or remove newly generated content in generative layers, allowing for rapid iteration. 
So when adding a puddle and floating bubbles to a photo of a corgi, for example, the puddle shows the reflection of the corgi and the bubbles appear transparent. Supercharging Adobe Photoshop with generative AI Generative Fill “uses the native powers of Photoshop and supercharges it with generative AI,” Maria Yap, vice president of digital imaging at Adobe, told VentureBeat. She explained that by bringing Firefly capabilities directly into Photoshop, users who felt nervous about using generative AI can realize they remain fully in control of their creativity — that it is simply another tool in their arsenal. “Our customers are excited because Firefly is a commercially safe model, using high-quality images, making sure that there’s no copyright infringement in the creation process,” she said. “Then they’re really seeing the power of it — the reaction has been so incredible, we’ll literally hear comments like, ‘My jaw is on the desk, I cannot believe the power of this.'” Yap added that she has been at Adobe for 25 years and working with Photoshop for nearly 18 years. “To be frank, it’s been a decade since I felt a moment like this where I felt people are just going to be shocked, surprised and delighted.” Building on previous Adobe AI capabilities In a press release, Adobe emphasized its decade-long history of delivering AI capabilities through its Adobe Sensei technology, including features like Neural Filters in Photoshop, Content Aware Fill in After Effects, Attribution AI in Adobe Experience Platform and Liquid Mode in Acrobat. But since its launch six weeks ago, Adobe Firefly has become one of the company’s most successful beta launches, with beta users generating over 100 million assets. 
The company says that Firefly is the only AI service that generates commercially viable, professional-quality content, and is designed to be embedded directly into creators’ workflows. Firefly’s first model is trained on Adobe Stock images, openly licensed content and other public domain content without copyright restrictions. Enterprises will be able to extend Firefly with their own creative collateral in order to generate content that includes the company’s images, vectors and brand language. "
14,535
2,021
"Data labeling for AI research is highly inconsistent, study finds | VentureBeat"
"https://venturebeat.com/business/data-labeling-for-ai-research-is-highly-inconsistent-study-finds"
"Data labeling for AI research is highly inconsistent, study finds Supervised machine learning, in which machine learning models learn from labeled training data, is only as good as the quality of that data. In a study published in the journal Quantitative Science Studies , researchers at consultancy Webster Pacific and the University of California, San Diego and Berkeley investigate to what extent best practices around data labeling are followed in AI research papers, focusing on human-labeled data. They found that the types of labeled data range widely from paper to paper and that a “plurality” of the studies they surveyed gave no information about who performed labeling — or where the data came from. While labeled data is usually equated with ground truth, datasets can — and do — contain errors. 
The processes used to build them are inherently error-prone, which becomes problematic when these errors reach test sets, the subsets of datasets researchers use to compare progress. A recent MIT paper identified thousands to millions of mislabeled samples in datasets used to train commercial systems. These errors could lead scientists to draw incorrect conclusions about which models perform best in the real world, undermining benchmarks. The coauthors of the Quantitative Science Studies paper examined 141 AI studies across a range of different disciplines, including social sciences and humanities, biomedical and life sciences, and physical and environmental sciences. Out of all of the papers, 41% tapped an existing human-labeled dataset, 27% produced a novel human-labeled dataset, and 5% didn’t disclose either way. (The remaining 27% used machine-labeled datasets.) Only half of the projects using human-labeled data revealed whether the annotators were given documents or videos containing guidelines, definitions, and examples they could reference as aids. Moreover, there was a “wide variation” in the metrics used to rate whether annotators agreed or disagreed with particular labels, with some papers failing to note this altogether. Compensation and reproducibility As a previous study by Cornell and Princeton scientists pointed out, a major venue for crowdsourcing labeling work is Amazon Mechanical Turk, where annotators mostly originate from the U.S. and India. This can lead to an imbalance of cultural and social perspectives. For example, research has found that models trained on ImageNet and Open Images, two large, publicly available image datasets, perform worse on images from Global South countries. Images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan compared to images of grooms from the U.S. 
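One of the most common inter-annotator agreement metrics — the kind the paper found reported so inconsistently — is Cohen's kappa, which corrects the raw agreement rate between two annotators for the agreement they would reach by chance. A minimal stdlib implementation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over labels of the product of each
    # annotator's marginal frequency for that label.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labeling the same five business reviews.
kappa = cohens_kappa(["pos", "pos", "neg", "neg", "pos"],
                     ["pos", "neg", "neg", "neg", "pos"])
```

Here the annotators agree on 4 of 5 items (0.8 raw agreement), but after discounting chance agreement kappa falls to about 0.62 — which is exactly why papers that report only raw agreement, or no metric at all, are hard to compare.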
For annotators, labeling tasks tend to be monotonous and low-paying — ImageNet workers made a median of $2 per hour in wages. Unfortunately, the Quantitative Science Studies survey shows that the AI field leaves the issue of fair compensation largely unaddressed. Most publications didn’t indicate what type of reward they offered to labelers or even include a link to the training dataset. Beyond doing a disservice to labelers, the lack of links threatens to exacerbate the reproducibility problem in AI. At ICML 2019, 30% of authors failed to submit code with their papers by the start of the conference. And one report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. “Some of the papers we analyzed described in great detail how the people who labeled their dataset were chosen for their expertise, from seasoned medical practitioners diagnosing diseases to youth familiar with social media slang in multiple languages. That said, not all labeling tasks require years of specialized expertise, such as more straightforward tasks we saw, like distinguishing positive versus negative business reviews or identifying different hand gestures,” the coauthors of the Quantitative Science Studies paper wrote. “Even the more seemingly straightforward classification tasks can still have substantial room for ambiguity and error for the inevitable edge cases, which require training and verification processes to ensure a standardized dataset.” Moving forward The researchers avoid advocating for a single, one-size-fits-all solution to human data labeling. However, they call for data scientists who choose to reuse datasets to exercise as much caution around the decision as they would if they were labeling the data themselves — lest bias creep in. 
An earlier version of ImageNet was found to contain photos of naked children, porn actresses, and college parties, all scraped from the web without those individuals’ consent. Another popular dataset, 80 Million Tiny Images, was taken offline after an audit surfaced racist, sexist, and otherwise offensive annotations, such as nearly 2,000 images labeled with the N-word and labels like “rape suspect” and “child molester.” “We see a role for the classic principle of reproducibility, but for data labeling: does the paper provide enough detail so that another researcher could hypothetically recruit a similar team of labelers, give them the same instructions and training, reconcile disagreements similarly, and have them produce a similarly labeled dataset?” the researchers wrote. “[Our work gives] evidence to the claim that there is substantial and wide variation in the practices around human labeling, training data curation, and research documentation … We call on the institutions of science — publications, funders, disciplinary societies, and educators — to play a major role in working out solutions to these issues of data quality and research documentation.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,536
2,023
"Our structure"
"https://openai.com/our-structure"
"Our structure We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity. Updated June 28, 2023 We announced our “capped profit” structure in 2019, about three years after founding the original OpenAI Nonprofit. Since the beginning, we have believed that powerful AI, culminating in AGI—meaning a highly autonomous system that outperforms humans at most economically valuable work—has the potential to reshape society and bring tremendous benefits, along with risks that must be safely addressed. The increasing capabilities of present day systems mean it’s more important than ever for OpenAI and other AI companies to share the principles, economic mechanisms, and governance models that are core to our respective missions and operations. Overview We founded the OpenAI Nonprofit in late 2015 with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity. A project like this might previously have been the provenance of one or multiple governments—a humanity-scale endeavor pursuing broad benefit for humankind. 
Seeing no clear path in the public sector, and given the success of other ambitious projects in private industry (e.g., SpaceX, Cruise, and others), we decided to pursue this project through private means bound by strong commitments to the public good. We initially believed a 501(c)(3) would be the most effective vehicle to direct the development of safe and broadly beneficial AGI while remaining unencumbered by profit incentives. We committed to publishing our research and data in cases where we felt it was safe to do so and would benefit the public. We always suspected that our project would be capital intensive, which is why we launched with the goal of $1 billion in donation commitments. Yet over the years, OpenAI’s Nonprofit received approximately $130.5 million in total donations, which funded the Nonprofit’s operations and its initial exploratory work in deep learning, safety, and alignment. It became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward, jeopardizing our mission. So we devised a structure to preserve our Nonprofit’s core mission, governance, and oversight while enabling us to raise the capital for our mission: The OpenAI Nonprofit would remain intact, with its board continuing as the overall governing body for all OpenAI activities. A new for-profit subsidiary would be formed, capable of issuing equity to raise capital and hire world class talent, but still at the direction of the Nonprofit. Employees working on for-profit initiatives were transitioned over to the new subsidiary. The for-profit would be legally bound to pursue the Nonprofit’s mission, and carry out that mission by engaging in research, development, commercialization and other core operations. Throughout, OpenAI’s guiding principles of safety and broad benefit would be central to its approach. 
The for-profit’s equity structure would have caps that limit the maximum financial returns to investors and employees to incentivize them to research, develop, and deploy AGI in a way that balances commerciality with safety and sustainability, rather than focusing on pure profit-maximization. The Nonprofit would govern and oversee all such activities through its board in addition to its own operations. It would also continue to undertake a wide range of charitable initiatives, such as sponsoring a comprehensive basic income study, supporting economic impact research, and experimenting with education-centered programs like OpenAI Scholars. Over the years, the Nonprofit also supported a number of other public charities focused on technology, economic impact and justice, including the Stanford University Artificial Intelligence Index Fund, Black Girls Code, and the ACLU Foundation. In that way, the Nonprofit would remain central to our structure and control the development of AGI, and the for-profit would be tasked with marshaling the resources to achieve this while remaining duty-bound to pursue OpenAI’s core mission. The primacy of the mission above all is encoded in the operating agreement of the for-profit, which every investor and employee is subject to: The structure in more detail While investors typically seek financial returns, we saw a path to aligning their motives with our mission. We achieved this innovation with a few key economic and governance provisions: First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary. Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. 
While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors. Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time. Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity. Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology. We strive to preserve these core governance and economic components of our structure when exploring opportunities to accelerate our work. Indeed, given the path to AGI is uncertain, our structure is designed to be adaptable—we believe this is a feature, not a bug. Microsoft Shortly after announcing the OpenAI capped profit structure (and our initial round of funding) in 2019, we entered into a strategic partnership with Microsoft. We subsequently extended our partnership, expanding both Microsoft’s total investment as well as the scale and breadth of our commercial and supercomputing collaborations. While our partnership with Microsoft includes a multibillion dollar investment, OpenAI remains an entirely independent company governed by the OpenAI Nonprofit. Microsoft has no board seat and no control. And, as explained above, AGI is explicitly carved out of all commercial and IP licensing agreements. These arrangements exemplify why we chose Microsoft as our compute and commercial partner. 
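The capped-return mechanism described above reduces to simple arithmetic: investors receive proceeds only up to a fixed multiple of their investment, and everything beyond the cap flows to the Nonprofit. The sketch below is illustrative only — the page does not state the cap multiples, so the 100x default here is an assumption, not OpenAI's actual figure:

```python
def distribute(proceeds, invested, cap_multiple=100):
    """Split proceeds between capped investors and the nonprofit.

    cap_multiple is a hypothetical illustration; actual caps vary by round
    and are not specified in the source text.
    """
    investor_cap = invested * cap_multiple
    to_investors = min(proceeds, investor_cap)
    to_nonprofit = proceeds - to_investors  # residual above the cap
    return to_investors, to_nonprofit

print(distribute(5_000, 10))  # cap reached → (1000, 4000)
print(distribute(500, 10))    # below cap → (500, 0)
```

The point of the structure is visible in the first call: once returns hit the cap, every additional dollar of value accrues to the Nonprofit rather than to equity holders.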
From the beginning, they accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity. They have also worked with us to create and refine our joint safety board that reviews our systems before they are deployed. Harkening back to our origins, they understand that this is a unique and ambitious project that requires resources at the scale of the public sector, as well as the very same conscientiousness to share the ultimate results with everyone. Our board OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner. "
14,537
2,023
"Doomer AI advisor joins Musk's xAI, the 4th top research lab focused on AI apocalypse | VentureBeat"
"https://venturebeat.com/ai/doomer-advisor-joins-musks-xai-the-4th-top-research-lab-focused-on-ai-apocalypse"
"Doomer AI advisor joins Musk’s xAI, the 4th top research lab focused on AI apocalypse Elon Musk has brought on Dan Hendrycks, a machine learning researcher who serves as the director of the nonprofit Center for AI Safety, as an advisor to his new startup, xAI. The Center for AI Safety sponsored a Statement on AI Risk in May that was signed by the CEOs of OpenAI, DeepMind, Anthropic and hundreds of other AI experts. The organization receives over 90% of its funding via Open Philanthropy, a nonprofit run by a couple (Dustin Moskovitz and Cari Tuna) prominent in the controversial Effective Altruism (EA) movement. 
EA is defined by the Center for Effective Altruism as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible.” According to numerous EA adherents, the paramount concern facing humanity revolves around averting a catastrophic scenario where an AGI created by humans eradicates our species. Excited to help advise on AI safety https://t.co/zxbIExFo56 Musk’s appointment of Hendrycks is significant because it is the clearest sign yet that four of the world’s most famous and well-funded AI research labs — OpenAI, DeepMind, Anthropic and now xAI — are bringing these kinds of existential risk, or x-risk, ideas about AI systems to the mainstream public. Many AI experts have complained about x-risk focus That is the case even though many top AI researchers and computer scientists do not agree that this “doomer” narrative deserves so much attention. For example, Sara Hooker, head of Cohere for AI, told VentureBeat in May that x-risk “was a fringe topic.” And Mark Riedl, professor at the Georgia Institute of Technology, said that existential threats are “often reported as fact,” which he added “goes a long way to normalizing, through repetition, the belief that only scenarios that endanger civilization as a whole matter and that other harms are not happening or are not of consequence.” NYU AI researcher and professor Kyunghyun Cho agreed, telling VentureBeat in June that he believes these “doomer narratives” are distracting from the real issues, both positive and negative, posed by today’s AI. “I’m disappointed by a lot of this discussion about existential risk; now they even call it literal ‘extinction,'” he said. 
“It’s sucking the air out of the room.” Other AI experts have also pointed out, both publicly and privately, that they are concerned by the companies’ publicly-acknowledged ties to the EA community — which is supported by tarnished tech figures like FTX’s Sam Bankman-Fried — as well as various TESCREAL movements such as longtermism and transhumanism. “I am very aware of the fact that the EA movement is the one that is actually driving the whole thing around AGI and existential risk,” Cho told VentureBeat. “I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and they think only they can solve.” Timnit Gebru, in a Wired article last year, pointed out that Bankman-Fried was one of EA’s largest funders until the recent bankruptcy of his FTX cryptocurrency platform. Other billionaires who have contributed big money to EA and x-risk causes include Elon Musk, Vitalik Buterin, Ben Delo, Jaan Tallinn, Peter Thiel and Dustin Moskovitz. As a result, Gebru wrote, “all of this money has shaped the field of AI and its priorities in ways that harm people in marginalized groups while purporting to work on ‘beneficial artificial general intelligence’ that will bring techno utopia for humanity. This is yet another example of how our technological future is not a linear march toward progress but one that is determined by those who have the money and influence to control it.” Here is a rundown of where this tech quartet stands when it comes to AGI, x-risk and Effective Altruism: xAI: ‘Understand the true nature of the universe’ Mission: Engineer an AGI to “understand the universe” Focus on AGI and x-risk: Elon Musk, who helped found OpenAI in 2015, reportedly left that startup because he felt it wasn’t doing enough to develop AGI safely. 
He also played a key role in convincing AI leaders to sign Hendrycks’ Statement on AI Risk that says “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Musk developed xAI, he has said, because he believes a smarter AGI will be less likely to destroy humanity. “The safest way to build an AI is actually to make one that is maximally curious and truth-seeking,” he said in a recent Twitter Spaces talk. Ties to Effective Altruism: Musk himself has claimed that writings about EA by one of its originators, philosopher William MacAskill, are “a close match for my philosophy.” As for Hendrycks, according to a recent Boston Globe interview, he “claims he was never an EA adherent, even if he brushed up against the movement,” and says that “AI safety is a discipline that can, and does, stand apart from effective altruism.” Still, Hendrycks receives funding from Open Philanthropy and has said he became interested in AI safety because of his participation in 80,000 Hours, a career exploration program associated with the EA movement. OpenAI: ‘Creating safe AGI that benefits all of humanity’ Mission: In 2015, OpenAI was founded with a mission to “ensure that artificial general intelligence benefits all of humanity.” OpenAI’s website notes: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” Focus on AGI and x-risk: Since its founding, OpenAI has never wavered from its AGI-focused mission. 
It has posted many blog posts over the past year with titles like “Governing Superintelligence,” “Our Approach to AI Safety,” and “Planning for AGI and Beyond.” Earlier this month, OpenAI announced a new “superalignment team” with a goal to “solve the core technical challenges of superintelligence alignment in four years.” The company said its cofounder and chief scientist Ilya Sutskever will make this research his core focus, and the company said it would dedicate 20% of its compute resources to its superalignment team. One team member recently called it the “notkilleveryoneism” team: 1) Yes, this is the notkilleveryoneism team. ( @AISafetyMemes …) Ties to Effective Altruism: In March 2017, OpenAI received a grant of $30 million from Open Philanthropy. In 2020, MIT Technology Review’s Karen Hao reported that “the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of Effective Altruism.” These days, the company’s head of alignment, Jan Leike, who leads the superalignment team, reportedly identifies with the EA movement. And while OpenAI CEO Sam Altman has criticized EA in the past, particularly in the wake of the Sam Bankman-Fried scandal, he did complete the 80,000 Hours course, which was created by EA originator William MacAskill. but though i think EA is an incredibly flawed movement, i will say: as individuals, EAs are almost always exceptionally nice, well-meaning people. the movement has some very weird emergent behavior, but i'm happy to see the self-reflection and feel confident it'll emerge better. 
Google DeepMind: ‘Solving intelligence to advance science and benefit humanity’ Mission: “To unlock answers to the world’s biggest questions by understanding and recreating intelligence itself.” Focus on AGI and x-risk: DeepMind was founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman, and in 2014 the company was acquired by Google. In 2023, DeepMind merged with Google Brain to form Google DeepMind. Its AI research efforts, which have often focused on reinforcement learning through game challenges such as its AlphaGo program, have always had a strong focus on an AGI future: “By building and collaborating with AGI we should be able to gain a deeper understanding of our world, resulting in significant advances for humanity,” the company website says. A recent interview with CEO Hassabis in the Verge said that “Demis is not shy that his goal is building an AGI, and we talked through what risks and regulations should be in place and on what timeline.” Ties to Effective Altruism: DeepMind researchers like Rohin Shah and Sebastian Farquhar identify as Effective Altruists, while Hassabis has spoken at EA conferences, and groups from DeepMind have attended the Effective Altruism Global Conference. Also, Pushmeet Kohli, principal scientist and research team leader at DeepMind, has been interviewed about AI safety on the 80,000 Hours podcast. Anthropic: ‘AI research and products that put safety at the frontier’ Mission: According to Anthropic’s website, its mission is to “ensure transformative AI helps people and society flourish. Progress this decade may be rapid, and we expect increasingly capable systems to pose novel challenges. We pursue our mission by building frontier systems, studying their behaviors, working to responsibly deploy them, and regularly sharing our safety insights. 
We collaborate with other projects and stakeholders seeking a similar outcome.” Focus on AGI and x-risk: Anthropic was founded in 2021 by several former employees at OpenAI who objected to OpenAI’s direction (such as its relationship with Microsoft) — including Dario Amodei, who served as OpenAI’s vice president of research and is now Anthropic’s CEO. According to a recent in-depth New York Times article called “Inside the White-Hot Center of AI Doomerism,” Anthropic employees are very concerned about x-risk: “Many of them believe that AI models are rapidly approaching a level where they might be considered artificial general intelligence, or AGI, the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us.” Ties to Effective Altruism: Anthropic has some of the clearest ties to the EA community of any of the big AI labs. “No major AI lab embodies the EA ethos as fully as Anthropic,” said the New York Times piece. “Many of the company’s early hires were effective altruists, and much of its start-up funding came from wealthy EA-affiliated tech executives, including Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, a co-founder of Skype.” "
14,538
2,023
"How AI can mitigate supply chain issues | VentureBeat"
"https://venturebeat.com/ai/how-ai-can-mitigate-supply-chain-issues"
"How AI can mitigate supply chain issues The supply chain crisis has been much in the news of late. It’s not difficult to understand why. The crisis has had a profound impact across industries and throughout the global economy. It has contributed to surging prices, layoffs, productivity declines and empty store shelves. However, there is hope on the horizon and it is coming in the form of Artificial Intelligence (AI). The technology is improving the supply chain in a myriad of ways, from optimizing inventory management to enhancing warehousing and storage processes to automating critical elements of the supply chain. If properly executed, supply chain AI has the ability to improve logistics drastically at a time when every minute counts. 
Early adopters of AI in supply chain management saw a decrease in logistics costs of 15%, an improvement in inventory levels of 35%, and a boost in service levels of 65%. This automation and optimization could be the difference between a business thriving or floundering when supply issues arise. Optimizing inventory management Inventory management is often both an art and a science. It requires decision-makers to maintain constant oversight of existing inventory levels while anticipating future needs. It mandates that managers cultivate sufficient knowledge of market trends and customer behaviors to be able to identify the sweet spot in inventory planning, ensuring there are always sufficient supplies of necessary products and materials while preventing surpluses and waste. This is a challenging process, one that can have significant ramifications for the supply chain, as effective inventory management prevents clogging the supply line with rush shipments or with superfluous transports. This is where the power of AI shines through. AI-driven technologies can provide continuous surveillance of warehouse, retail and industry inventories, and can autonomously order new materials when supply levels reach a critical level. Perhaps even more significantly, the machine learning (ML) capabilities of AI technologies mean that decision-makers will have more timely, relevant and plentiful data with which to plan inventory needs. This includes robust capabilities for accruing data on market trends, customer behaviors and related metrics to predict short-term and long-range supply needs. 
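The autonomous reordering described above often comes down to a reorder-point rule: order when on-hand stock falls below expected demand over the replenishment lead time plus a safety buffer. A minimal sketch (the formula is the classic one; the fixed `daily_demand` number stands in for what an ML forecast would supply, and the order-up-to target is a deliberately naive assumption):

```python
def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Classic reorder point: demand during lead time plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

def check_inventory(on_hand, daily_demand, lead_time_days, safety_stock):
    """Trigger a replenishment order once stock dips to the reorder point."""
    rop = reorder_point(daily_demand, lead_time_days, safety_stock)
    if on_hand <= rop:
        qty = rop * 2 - on_hand  # naive order-up-to target (illustrative)
        return f"reorder {qty} units"
    return "ok"

print(check_inventory(on_hand=120, daily_demand=40, lead_time_days=3,
                      safety_stock=30))  # → reorder 180 units
```

In a production system the constant demand figure would be replaced by a rolling forecast, which is exactly where the ML capabilities mentioned above come in.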
Supporting transport, warehousing, and storage Another significant challenge impacting the supply chain is the necessity of ensuring not only that materials reach their intended destination in a timely manner, but that supplies are in optimal condition when they get there. This is no mean feat, particularly when transporting fragile materials across a continent or around the world. AI-powered sensors, though, can track individual shipments, as well as discrete items within each shipment, at every phase of transport, reducing the risk of lost or misdirected shipments. However, this is only the beginning of the story, as AI sensors aren’t just adept at tracking location. They can also provide accurate, comprehensive, and relevant data on environmental conditions across the entire supply chain, including warehousing, storage, and transport containers. This is an especially important asset for reducing risk in cold chain shipping and storage. Materials that need to be maintained at a specific temperature or humidity level — such as perishable foods, medications, or certain electronics — may be rendered dangerous or unusable if there is a failure in a cargo container’s or warehouse’s refrigeration systems. AI sensors can send alerts to stakeholders when environmental conditions begin to approach unsafe parameters, allowing them to take action before inventory is lost. This capability can also significantly increase trust among stakeholders by enhancing visibility and transparency across the supply chain. Automating processes Because ML enables AI to “learn” from each action it performs, the capacity to automate processes increases substantially over time. This means that not only are workflows less dependent on human labor, but they are more accurate and reliable than the product of human work. Human error is a simple fact of life. People get tired. They make mistakes. They have physical and cognitive limitations. AI, however, never tires. 
It generally only makes mistakes when it has been programmed incorrectly. Its “intelligence” increases exponentially over time. What this means is that when you automate elements of the supply chain using AI technologies, you’re going to get greater efficiency, accuracy, and productivity than even the most skilled humans. In addition, the COVID-19 pandemic has shown us that human vulnerabilities can jeopardize not only workers’ well-being but also the health of the supply chain. Pervasive and prolonged lockdowns threw the entire global economy into turmoil, decimated once-successful businesses, and threatened the livelihoods of millions of workers. Using AI to automate the supply chain means that work can continue to flow, businesses can continue to operate, and products can continue to be produced and consumed should another pandemic or other global crisis emerge. If implemented correctly, business leaders and employees may never again have to face the terrible choice between their health and safety and their career and income. Making business decisions with AI Of course, implementing these AI practices is easier said than done. For one, there may be overhead costs to consider. For another, employees may be concerned that new AI systems will take over their jobs, especially if they work in the industrial sector. To address such concerns, it’s best to approach AI with the following steps: Learn more about AI: As a business leader, the more you understand AI, the more equipped you’ll be at pointing out solutions you can apply to your business. You may even create new solutions of your own. As such, stay up to date with the latest AI technology announcements. Pitch a new AI system to a team of leaders: This team may be your corporate leaders or simply a managerial team. Within this pitching process, you’ll also address any overhead costs. 
If you find these costs can’t be budgeted for, you can either go back to the drawing board to fit them in or scrap the idea altogether. Announce your plans to your employees: Approach this with sensitivity and be open to feedback. Most automation is typically for the betterment of employees and their safety, so it’s best to also communicate this. Be adaptable: You’ll inevitably have to change plans at several points in your pitching process. If this happens, be open to new ideas, especially if it’s to solve supply chain issues. Even if this pitching process falls flat, hold out hope. AI is still a relatively new technology, and it may take time for your company and your employees to accept it with open arms. The takeaway The ongoing supply chain crisis has taken a profound toll on businesses, workers, and consumers alike. However, AI innovations may make such crises a thing of the past. AI technologies are proving highly beneficial across all stages of the supply chain. They optimize inventory management, enhance warehousing and storage and support process automation — all to spur efficiency and productivity, prevent human error, and protect the supply chain from future crises. Charlie Fletcher is a freelance writer covering tech and business. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers 
DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,539
2,023
"Nvidia CEO highlights accelerated computing and AI’s role in chip manufacturing at ITF World 2023 | VentureBeat"
"https://venturebeat.com/ai/nvidia-ceo-highlights-accelerated-computing-and-ais-role-in-chip-manufacturing-at-itf-world-2023"
"Nvidia CEO highlights accelerated computing and AI’s role in chip manufacturing at ITF World 2023 In his keynote address at the ITF World 2023 semiconductor conference, Jensen Huang, founder and CEO of Nvidia, emphasized the profound impact of accelerated computing and artificial intelligence (AI) in the chip manufacturing industry. In his video presentation, Huang provided a comprehensive overview of the latest computing advancements propelling industries worldwide. Huang highlighted the potential of Nvidia’s accelerated computing and AI solutions in chipmaking. He emphasized their intersection with semiconductor manufacturing. He also stressed the need for a new approach to meet the rising demand for computing power while addressing concerns regarding net-zero goals. “We are experiencing two simultaneous platform transitions — accelerated computing and generative AI,” Huang said. 
“I am thrilled to see Nvidia accelerated computing and AI in service of the world’s chipmaking industry.” As an example of how AI and accelerated computing are transforming the technology industry, Huang explained that to achieve advanced chip manufacturing, over 1,000 precise steps must be executed to create features the size of a biomolecule, with each step executed perfectly to ensure functional output. “Sophisticated computational sciences are performed at every stage to compute the features to be patterned and to do defect detection for in-line process control,” Huang said. “Chip manufacturing is an ideal application for Nvidia accelerated and AI computing.” Furthermore, Huang recognized that while the exponential performance growth of central processing units (CPUs) had primarily driven the technology industry for almost four decades, CPU design has reached a state of maturity, resulting in a slowdown in the rate at which semiconductors enhance their power and efficiency. Leveraging accelerated computing to streamline tech development Huang noted Nvidia’s pioneering efforts in accelerated computing, a groundbreaking approach that combines the parallel processing capabilities of graphics processing units (GPUs) with CPUs. Nvidia, he said, is well suited to tackle today’s computational science challenges, and his company’s accelerated computing is fueling the AI revolution. He cited several examples of how Nvidia GPUs are increasingly crucial in chip manufacturing. Companies like D2S, IMS Nanofabrication and NuFlare are using Nvidia GPUs to accelerate pattern rendering and mask process correction in the creation of photomasks — stencils used to transfer patterns onto wafers using electron beams. 
Meanwhile, semiconductor manufacturers KLA , Applied Materials and Hitachi High-Tech are incorporating Nvidia GPUs into their e-beam and optical wafer inspection and review systems. “We have already accelerated the processing by 50 times,” Huang said. “Tens of thousands of CPU servers can be replaced by a few hundred Nvidia DGX systems, reducing power and cost by an order of magnitude.” In March, Nvidia launched cuLitho, a software library that offers optimized tools and algorithms for computational lithography, accelerated by GPUs. The future of AI and digital innovations Huang stressed the far-reaching influence of AI and accelerated computing, extending beyond chip manufacturing to permeate the entire technology industry. He acknowledged that the concurrent shifts in accelerated computing and generative AI are shaping the future of the technological landscape. Looking to the future, Huang referred to the next wave of AI as “embodied AI” — intelligent systems capable of understanding, reasoning and interacting with the physical world. He cited robotics, autonomous vehicles, and chatbots with heightened physical world comprehension as examples of this technology. To demonstrate advancements in embodied AI, Huang unveiled Nvidia VIMA, a multimodal embodied AI system capable of carrying out intricate tasks guided by visual text prompts. Through acquiring concepts, comprehending boundaries and even emulating physics, VIMA signifies a notable progression in AI capabilities. Huang also revealed Nvidia’s Earth-2 project, designed to develop a digital replica of the planet. Earth-2 will forecast weather patterns, provide long-range climate predictions and ultimately contribute to the search for affordable and environmentally friendly energy solutions. This endeavor employs FourCastNet, a physics-AI model that rapidly simulates global weather patterns. 
These systems hold great potential for addressing pressing issues of our era, including the demand for sustainable energy solutions. “The reactor plasma physics-AI runs on Nvidia AI, and its digital twin runs in Nvidia Omniverse,” Huang said. “Such systems hold promise for further advancements in the semiconductor industry. I look forward to physics-AI, robotics, and Omniverse-based digital twins helping to advance the future of chip manufacturing.” "
14,540
2,023
"Zendesk AI: Leveling up with generative AI for a more intuitive and intelligent CX platform | VentureBeat"
"https://venturebeat.com/ai/zendesk-ai-leveling-up-with-generative-ai-for-a-more-intuitive-and-intelligent-cx-platform"
"Zendesk AI: Leveling up with generative AI for a more intuitive and intelligent CX platform Zendesk is expanding the use of artificial intelligence (AI) across its customer experience (CX) platform with the release today of a series of new capabilities the company is branding as Zendesk AI. The new Zendesk AI capabilities include advanced bots for handling customer inquiries across a range of industries. Customer service agents will now also benefit from AI-powered assistance in responding to inquiries. Additionally, the system integrates intelligent triage capabilities that use sentiment analysis and intent detection to help route requests properly. While many of the Zendesk AI capabilities have been built by the company’s own teams, the new services have also benefited from a healthy dose of generative AI via a partnership with OpenAI. 
The generative AI component will fit in across the Zendesk AI suite, helping to generate responses and summarize content. “We believe we need to make AI easy to understand and use and accessible to everyone,” Cristina Fonseca, head of AI at Zendesk, told VentureBeat. What generative AI adds to Zendesk Zendesk was no stranger to AI prior to its partnership with OpenAI. Zendesk took a big step into AI in 2021 with the acquisition of Portugal-based startup Cleverly AI, where Fonseca was formerly the CEO. In September 2022, Zendesk announced a major update integrating the Cleverly AI technology to help speed up customer service responses. Fonseca said today’s Zendesk AI release is an evolution of what the company announced in September 2022, which was targeted specifically at the retail industry. She noted that the new release is available to a broader group of customers. Zendesk has offered bot capabilities for some time, but Fonseca said OpenAI’s generative AI foundation has significantly expanded those capabilities. “We believe software should be intelligent off the shelf and, for most bots to work, you need to spend a lot of time training them,” she said. “This is one of the main features we are now launching — with bots that are pretrained and already understand customers.” The generative AI will also help power a revamped set of capabilities for increased customer service agent productivity. Zendesk AI uses OpenAI’s technology to support summarization and sentiment analysis for inquiries that, in turn, help agents respond effectively. The ability to assist agents with creating replies is also part of the new update. 
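Conceptually, the intelligent triage described above pairs intent detection with sentiment analysis to pick a route. A toy sketch of that idea follows; every keyword rule, queue name and function here is invented for illustration, and Zendesk's actual triage uses proprietary models trained on customer-service data, not keyword matching:

```python
# Illustrative triage sketch: classify a ticket's intent and sentiment,
# then route it to a queue. The keyword rules below are hypothetical
# stand-ins for trained models.

INTENT_KEYWORDS = {
    "refund": ["refund", "money back", "chargeback"],
    "shipping": ["delivery", "shipping", "tracking"],
    "account": ["password", "login", "account"],
}

NEGATIVE_WORDS = {"angry", "terrible", "worst", "unacceptable", "frustrated"}

def detect_intent(text):
    lowered = text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return intent
    return "general"

def detect_sentiment(text):
    lowered = text.lower()
    return "negative" if any(w in lowered for w in NEGATIVE_WORDS) else "neutral"

def triage(text):
    intent = detect_intent(text)
    sentiment = detect_sentiment(text)
    # Negative-sentiment tickets jump to a priority queue regardless of intent.
    queue = "priority" if sentiment == "negative" else intent
    return {"intent": intent, "sentiment": sentiment, "queue": queue}

print(triage("This is unacceptable, I want a refund now"))
# -> {'intent': 'refund', 'sentiment': 'negative', 'queue': 'priority'}
```

The point of the sketch is only the shape of the decision: two classifications feed one routing rule, which is why a model that misreads sentiment misroutes the ticket even when the intent is right.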
Why generative AI alone isn’t enough for CX Fonseca emphasized that while Zendesk AI is making use of generative AI from OpenAI, it is not abandoning its Cleverly AI roots. Intelligent triage, for example, is a foundational element the product uses that was developed by Cleverly AI for understanding customer intent, sentiment and language to appropriately direct an inquiry. For that system, Zendesk has its own proprietary models that were trained on Zendesk’s data. Fonseca said those models understand customer service because they were specifically trained on customer service data and provide a high degree of accuracy. “The way we see [OpenAI’s generative AI] is basically as a tool to help us accelerate things that were already in the roadmap, and make a ton of sense to add to our suite of products,” Fonseca said. As an example, she noted that Zendesk was building out its own approach to suggesting new replies for customer service agents, as well as creating content for a knowledge base about different issues. Without OpenAI, she said the draft content that Zendesk was able to generate was not quite polished. Now with the OpenAI integration, the data from Zendesk can be used to generate well-written replies and knowledge base articles. “We are trying to leverage OpenAI and large language models (LLMs) to help us perfect everything we do, on top of content,” she said. With all the power that AI brings to Zendesk and its users, Fonseca cautioned that it’s important to realize that AI can’t and shouldn’t do everything when it comes to customer experience. She noted that CX is complex and it includes workflows — and sometimes systems — that are not integrated with Zendesk. “I think the first thing we should do for our customers is help them understand what should be automated and what shouldn’t be automated,” she said. 
Because if something cannot be automated, there’s no point in you trying to have a bot talk to a customer if the bot cannot add any value.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,541
2,022
"PagerDuty expands incident response capabilities to build user trust and loyalty | VentureBeat"
"https://venturebeat.com/security/pagerduty-expands-incident-response-capabilities-to-build-user-trust-and-loyalty"
"PagerDuty expands incident response capabilities to build user trust and loyalty Staffing shortages, distributed teams that have had minimal collaboration, high-stakes “interrupt work” disrupting IT workflows, rising tech costs prompting consolidation. This set of “colliding macro issues” demands an elevated level of incident response. As Sean Scott, chief product development officer at PagerDuty, put it, organizations must move beyond the idea of “incident response” to a more comprehensive understanding of “incident management.” “Incident response used to be all about ‘how quickly can we get back up’ when your digital operations are disrupted, but today it is much deeper than that,” he said. 
For this reason, PagerDuty today announced enhancements to PagerDuty Operations Cloud to help expand capabilities around incident workflows. “Consumer expectations are higher than ever: Seconds of latency can be the difference between building loyalty and losing a customer,” said Scott. “Incident management is about both reducing the risk of that outcome and keeping teams focused on rewarding work like strategic innovation, not firefighting — and especially not at 3 a.m.” Bigger mistakes, increasing demand Considering that the average cost of a data breach is now $4.35 million, the global incident and emergency management market continues to grow — by one estimate, it will total nearly $172 billion by 2026. According to KPMG, the top cyber incident response mistakes include: Untailored plans Teams unable to communicate with the right people in the right way Teams that lack skills or are wrong-sized or mismanaged Incident response tools that are “inadequate, unmanaged, untested or underutilized” Also, data pertinent to incidents isn’t readily available, the firm says, and incident response teams lack authority and visibility. And, users are often unclear about their role in the organization’s security posture. Furthermore, “there is no ‘intelligence’ in the threat intelligence provided to incident responders,” reports the firm. Thus, it’s important to integrate technology including AIops, automation and tools for site reliability engineering (SRE), said Scott. “Incident management goes into service levels that may be difficult to untangle,” he said. Automating response, standardizing runbooks For instance, a shopping cart is slow, or there is a partial outage because service APIs in a specific region are down, he said. This requires a platform that identifies operations that aren’t functioning as intended; when the root cause is identified, an alert is routed to the best person to resolve it. 
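The telemetry-audit step Scott describes boils down to: ingest signals, compare them against thresholds, and page the right on-call expert. A minimal sketch of that loop, where the metric names, thresholds and on-call roster are all invented for the example:

```python
# Illustrative sketch of threshold-based alerting on ingested telemetry.
# Thresholds and the on-call roster are hypothetical example data.

THRESHOLDS = {"checkout_latency_ms": 2000, "api_error_rate": 0.05}
ON_CALL = {"checkout_latency_ms": "web-perf@example.com",
           "api_error_rate": "platform@example.com"}

def check_telemetry(samples):
    """Return (metric, contact) pairs for every metric over its threshold."""
    pages = []
    for metric, value in samples.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            pages.append((metric, ON_CALL[metric]))
    return pages

# A slow shopping cart trips the latency threshold; the error rate is healthy.
print(check_telemetry({"checkout_latency_ms": 3500, "api_error_rate": 0.01}))
# -> [('checkout_latency_ms', 'web-perf@example.com')]
```

Auditing telemetry, in these terms, means checking that every signal worth paging on has a sensible threshold and a named owner in the roster.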
Businesses should audit telemetry (that is, how they are monitoring/ingesting signals from their digital systems), and determine a threshold for alerting the best on-call expert (who can ideally resolve the problem themselves). Organizations often have many different processes for different types of interruptions, and each use case may have different remediation “runbooks,” said Scott. These should be audited and standardized so that responders aren’t “hunting for a checklist on a wiki when a high-severity incident occurs,” he said. With automatic telemetry and diagnostics, response plays can become more sophisticated (and further automated). This could potentially enable organizations to remediate an issue before needing to alert on-call experts, he said. Just those few critical moments can mean preserving customers and saving money. “As businesses are increasing their digital maturity and enhancing incident response, they shouldn’t think of automation as this big, scary, all-or-nothing choice,” said Scott. “Get teams comfortable with it; little automations can move you closer, step-by-step, from human speed to machine speed.” PagerDuty prioritizing action PagerDuty’s new Incident Workflows feature allows teams to configure response workflows for different types of incidents based on various triggers, such as changes in urgency, status and priority. It also provides a list of incident actions. For example, an event in digital infrastructure comes in for a critical extract, transform, load (ETL) job failure. An on-call responder is then notified and goes to work to diagnose and remediate that issue rated with “moderate” severity. But then, a second event comes in: A mobile app is down for the Northwest region. This is “obviously a much bigger issue than the ETL issue, and should be prioritized as such,” said Scott. 
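The escalation logic in the ETL-versus-outage example can be pictured as a mapping from incident attributes to ordered response actions. A hypothetical sketch of such trigger-based workflows — the severity levels and action names are illustrative, not PagerDuty's API:

```python
# Illustrative sketch of trigger-based incident workflows: pick response
# actions based on an incident's severity. Action names are hypothetical;
# PagerDuty's actual Incident Workflows are configured in its platform.

WORKFLOWS = {
    "moderate": ["notify_on_call"],
    "critical": ["notify_on_call", "page_incident_commander",
                 "open_slack_channel", "start_zoom_bridge",
                 "run_diagnostics"],
}

def respond(incident):
    """Return the ordered response actions for an incident."""
    return WORKFLOWS.get(incident["severity"], ["notify_on_call"])

etl_failure = {"summary": "critical ETL job failed", "severity": "moderate"}
regional_outage = {"summary": "mobile app down in Northwest", "severity": "critical"}

print(respond(etl_failure))      # -> ['notify_on_call']
print(respond(regional_outage))  # the full escalation runs for the bigger incident
```

The moderate ETL failure pages one responder, while the regional outage fans out to chat channels, a video bridge and diagnostics — the same prioritization the example describes.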
Additionally, users can automatically alert customer support and public relations teams so that they can be more proactive and deflect additional customer feedback to the mobile team. Slack channels and Zoom Bridges can also be created automatically, and an automatic diagnostic is run to gather information or telemetry. A new PagerDuty Status Page allows users to communicate real-time operational updates to specific cohorts of customers. This can be fully automated or keep humans in the loop for approval, said Scott. For instance, a communications team can approve a customer/stakeholder-facing page before it is made public, while internal status pages can automatically alert the organization behind a firewall. Incident Workflows will move to early availability on November 15 and PagerDuty Status Page moves to early availability November 29. Tailoring alerts Meanwhile, flexible time windows for intelligent alert grouping let users tailor alerts and reduce noise. Furthermore, PagerDuty’s machine learning engine calculates and recommends ideal time windows for a specific service, said Scott. He reported that a sample of PagerDuty’s early access program shows that teams using the feature see a 10 to 45% increase in average compression rate on their noisiest services in weeks. Flexible time windows are currently in early availability, and will move to general availability in late November. Finally, a new custom field on incident feature provides more contextual information on the issue and the ability to view and access information from any surface. This service will become initially available in early 2023. 
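The compression rate behind alert grouping is simple arithmetic: if N raw alerts collapse into G grouped incidents, compression is 1 − G/N. A minimal sketch of time-window grouping — the window size, services and alert timestamps are made up, and PagerDuty's ML engine picks windows rather than using a fixed one:

```python
# Illustrative sketch of time-window alert grouping. Alerts on the same
# service that arrive within `window` seconds of the group's first alert
# are merged into one incident; compression = 1 - groups/alerts.

def group_alerts(alerts, window=300):
    """alerts: list of (timestamp_seconds, service) tuples, assumed sorted."""
    groups = []       # each group: {"service": s, "start": t, "count": n}
    open_groups = {}  # service -> index of its currently open group
    for ts, service in alerts:
        idx = open_groups.get(service)
        if idx is not None and ts - groups[idx]["start"] <= window:
            groups[idx]["count"] += 1
        else:
            groups.append({"service": service, "start": ts, "count": 1})
            open_groups[service] = len(groups) - 1
    return groups

alerts = [(0, "db"), (10, "api"), (30, "db"), (90, "db"), (400, "db")]
groups = group_alerts(alerts, window=300)
compression = 1 - len(groups) / len(alerts)
print(f"{len(alerts)} alerts -> {len(groups)} incidents "
      f"(compression {compression:.0%})")  # -> 5 alerts -> 3 incidents (compression 40%)
```

Widening the window raises compression (less noise) at the risk of burying a genuinely new incident inside an old group, which is the trade-off a recommended per-service window is meant to balance.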
Scott said that the company’s existing PagerDuty Digital Operations Maturity Curve model enables customers to identify where digital operations fall (from manual/reactive to proactive and predictive). And the company continues to share learnings and best practices from its own incident responses. “Regardless of how we label it, incident response/incident management is about preserving a seamless customer experience, and maintaining the trust and loyalty of customers,” said Scott. “This ultimately translates to protecting and growing revenue.” "
14,542
2,023
"Snowflake deal appears imminent as Neeva shuts down search, pivots to enterprise LLMs | VentureBeat"
"https://venturebeat.com/ai/snowflake-deal-appears-imminent-as-neeva-shuts-down-search-pivots-to-enterprise-llms"
"Snowflake deal appears imminent as Neeva shuts down search, pivots to enterprise LLMs A Snowflake deal to acquire Neeva seems imminent after Neeva, the Mountain View, CA-based search startup, once seen as a promising AI-driven challenger to Google’s search dominance, announced on Saturday it will shut down its consumer search product to focus on enterprise use cases of LLMs and search. Neeva’s announcement came just a few days after reporting from The Information that said Montana-based Snowflake had signed a letter of intent to acquire Neeva in order to offer AI software services to enterprise customers. Snowflake’s stock jumped with the news, which was notably in advance of Snowflake’s scheduled Q1 financial results announcements this Wednesday. A Snowflake spokesperson declined to comment, while Neeva representatives did not respond to outreach from VentureBeat. 
Neeva exploring enterprise LLM use cases Neeva was co-founded in 2019 by former senior Google advertising tech executives, including former Google SVP of ads and commerce Sridhar Ramaswamy. Most recently, Neeva had touted its AI-powered search engine’s ability to cite its sources. In a blog post announcing the shutdown of the consumer search product, Ramaswamy and co-founder Vivek Raghunathan said: “Over the past year, we’ve seen the clear, pressing need to use LLMs effectively, inexpensively, safely, and responsibly. Many of the techniques we have pioneered with small models, size reduction, latency reduction, and inexpensive deployment are the elements that enterprises really want, and need, today. We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.” Many AI companies are shifting focus towards the enterprise Many tech leaders are looking to the valuable opportunities of enterprise LLMs. OpenAI has talked about its work to offer customizations for enterprises, while Anthropic recently partnered with Scale AI to “bring generative AI to enterprises.” Stability AI has said it wants to build custom models for the largest companies and governments, while Cohere’s LLMs are entirely geared towards the enterprise. Experts, including EY global chief technology officer Nicola Morini Bianzino, have been saying for months that the “killer use case” of generative AI could be enterprise knowledge management. “Knowledge companies tend to store knowledge in a very flat, two-dimensional way that makes it difficult to access, interact and have a dialogue with,” he told VentureBeat in January. “We tried 20, 30, 40 years ago to build expert systems. That didn’t go really well because they were too rigid. 
I think this technology promises to overcome a lot of issues that expert systems have.” "
14,543
2,023
"Zuckerberg says Meta building generative AI into all its products, recommits to 'open science' | VentureBeat"
"https://venturebeat.com/ai/zuckerberg-says-meta-building-generative-ai-into-all-its-products-recommits-to-open-science"
"Zuckerberg says Meta building generative AI into all its products, recommits to ‘open science’ In an internal all-hands meeting this morning, Meta CEO Mark Zuckerberg said Meta is building generative AI into all of its products and reaffirmed the company’s commitment to an “open science-based approach” to AI research. The comments came just two days after senators sent a letter to Zuckerberg questioning the leak of Meta’s popular open-source large language model (LLM) LLaMA in March (which was seen by many experts as a threat to the open source AI community). “In the last year, we’ve seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI and that gives us the opportunity to now go take that technology, push it forward and build it into every single one of our products,” Zuckerberg said. 
“We’re going to play an important and unique role in the industry in bringing these capabilities to billions of people in new ways that other people aren’t going to do.” New Meta AI announcements Zuckerberg made a range of AI-related announcements at the meeting: First, he explained that Meta is building a range of generative text, image and video models, along with ones that can generate rich 3D content, all the way up to entire worlds in the metaverse. These generative AI-powered experiences are “under development in varying phases,” a Meta spokesperson said, adding that “our investments in AI continue to be foundational to our near-term and long-term success, especially as we get ready to bring our first generative AI-powered experiences into our family of apps and consumer products.” Zuckerberg also showcased LLM-powered AI agents with unique personas and skill sets that help and entertain people, and said the company will bring them to Messenger and WhatsApp first, but explore additional opportunities across its family of apps, consumer products and into the metaverse. In addition, he highlighted an Agents Playground, an experimental internal-only interface powered by LLaMA where users “can have conversations with AI agents and provide feedback to help us improve our systems.” 
© 2023 VentureBeat. All rights reserved. "
14,544
2,023
"Nvidia and Dell look to breathe new life into AI on premises with Project Helix | VentureBeat"
"https://venturebeat.com/ai/nvidia-and-dell-look-to-breathe-new-life-into-ai-on-premises-with-project-helix"
"Nvidia and Dell look to breathe new life into AI on premises with Project Helix Dell and Nvidia are extending their long-standing partnership with the new Project Helix initiative to bring the power of generative AI to on-premises enterprise deployments. Project Helix is an effort to combine hardware, software and services from the two vendors to help enterprises benefit from the emerging capabilities of large language models (LLMs) and generative AI. The initiative will include validated blueprints and reference deployments to help organizations deploy generative AI workloads. The hardware side will see Dell PowerEdge servers, including the PowerEdge XE9680 and R760a, benefit from Nvidia H100 Tensor Core GPUs. The hardware stack will also integrate with Dell PowerScale and Dell ECS enterprise object storage. 
The software stack includes Nvidia AI Enterprise as well as capabilities from the Nvidia NeMo framework for generative AI. To date, much of the work in generative AI has involved the cloud, but that’s not necessarily where all enterprises want to run workloads. “We’re bringing security and privacy to enterprise customers,” Kari Briski, VP of software product management at Nvidia, told VentureBeat. “Every enterprise needs an LLM for their business, so it just makes sense to do this on premises.” Project Helix looks to enable LLMops for enterprises The reality for many enterprises is that there is no need to build a new LLM from scratch. Rather, most enterprises will customize a pre-trained foundation model to understand the organization’s data. Briski acknowledged that the term “generative AI” is a much-hyped buzzword. The combination of Dell hardware with Nvidia hardware and software is also about enabling what Briski referred to as LLMops — that is, being able to operationalize an LLM for enterprise use cases. Nvidia and Dell are hardly strangers: The two vendors have been partnering on hardware solutions for years. Briski emphasized, however, that Project Helix is different from what the two companies have been collaborating on to date. “What we haven’t been doing is providing these pre-trained foundational models in a way that’s easily replicable,” she said. Benefiting from AI, no matter the deployment Briski explained that Project Helix blueprints will provide guidance to help enterprises deploy generative AI workloads that can be customized for an organization’s specific use case. She noted that it can be daunting for an organization to be able to optimize a model for latency and throughput in real time. 
Varun Chhabra, SVP for product marketing for the infrastructure solutions group and telecom at Dell, told VentureBeat that it is critical to understand how compute, storage and networking work together to enable a real-time generative AI workload. Determining the right mix of compute resources is important, and the best practices to do so are encapsulated within the Project Helix initiative. By running generative AI on Dell hardware, Chhabra expects that organizations will also be able to benefit from AI wherever they want to deploy, be it on-premises, at the edge or in the cloud. Chhabra is particularly optimistic about the potential for Project Helix. The name Helix is a nod to the double-helix structure of DNA, the basic building block of life on Earth. “If you think about the double helix and what it means to life, we felt it was a very appropriate metaphor for what we think is happening with generative AI, in terms of not just transforming people’s lives, but more specifically what will happen within enterprises and what this will unlock for our customers,” said Chhabra. "
14,545
2,020
"BioTech: Accelerating innovation in health care | VentureBeat"
"https://venturebeat.com/2020/11/10/biotech-accelerating-innovation-in-health-care"
"VB Lab Insights BioTech: Accelerating innovation in health care This article is part of a Technology and Innovation Insights series paid for by Samsung. As the race to save the world from the COVID-19 pandemic barrels on, and teams around the globe work tirelessly to develop effective therapeutics and vaccines, we have all become acutely aware of the potential for new breakthroughs in the biosciences. What are we learning from this global effort and how can we accelerate the delivery of new drugs and vaccines to fight current and future diseases more effectively? The answer will likely come from advances initiated by biotech startups, aided by new data processing and artificial intelligence. 
In this new episode of Samsung’s “The Next Wave” interview series, Young Sohn, Samsung’s President and Chief Strategy Officer, speaks with Rafaèle Tordjman, President and Founder of Jeito Capital — an investment fund dedicated to biotechnology and biopharmaceuticals — about how to boost research in healthcare by providing long-term support to startups and technology. Accompanying entrepreneurs over the long term Based on her long career as a practitioner, researcher, and managing partner at Sofinnova Partners, which earned her the Order of the Legion of Honor, Rafaèle Tordjman made three major observations. First, French biotech startups are chronically underfunded and cannot access the same financial resources as their American counterparts. Along with that, there are still few long-term investment decisions coming from venture capital funds in Europe. And finally, entrepreneurship must be encouraged, and entrepreneurs must be supported to facilitate their development. “I founded Jeito Capital two years ago in response to these shortcomings,” notes Tordjman. “The development of a new drug, a new vaccine as we see with COVID-19, and more generally a new innovative therapy is a long and complex process which requires financing and support over several years. In France, we have a pool of researchers and entrepreneurs in biotechnology and biopharmaceuticals who need to be encouraged and monitored in order to bring about innovation and therapeutic advances.” Information technology — an accelerator of therapeutic progress New information technologies also play a key role in accelerating pharmaceutical research. However, the opposite is also true: biotechnological innovations facilitate progress in the area of data processing. So we observe a convergence between information technology and medicine. Thanks to digitization, clinical trials in particular can be greatly accelerated and conducted remotely. 
The processing of those results can also be done faster and more accurately, which is essential to accelerating the introduction of new drugs and/or vaccines. Likewise, new applications on smartphones and other technologies make it possible to continuously monitor individuals’ body activity and thus make proactive rather than reactive diagnoses, giving doctors the means to cure or otherwise treat certain diseases faster. On the other side, therapeutic research also paves the way for further technological innovations, especially when it comes to AI. Today’s therapeutics, which means the treatment of diseases, involves very large amounts of data, which only computer technology can sort and analyze quickly. “Artificial intelligence can have an immediate impact, particularly in dermatology, to accelerate diagnosis,” comments Tordjman. Strengthening gender diversity — a real boost for innovation But beyond investment and IT, it is the diversity of individuals and talents — and collaboration within teams — that fuels and advances innovation in healthcare. The diversity of individuals means, first of all, strengthening the place of women in teams that are still overwhelmingly male. “Eighty percent of health decisions within families are made by women, who have historically always played an important role in this regard. However, only 5% of entrepreneurs in therapeutic research are women,” emphasizes Tordjman. “I want to change this. That’s the reason I founded the ‘Women Innovating Together in Healthcare’ (WITH) association over ten years ago. This association aims to help working women to boost their careers, but also to encourage younger women to become entrepreneurs, to promote their research and their projects in all health-related sectors,” says Rafaèle Tordjman. Diversity plays a major role when it comes to accelerating innovation, be it in healthcare or in other industries. 
This is why men must be encouraged to integrate more women into their organizations, just as women must be supported and encouraged to value their differences, and have confidence in their abilities to move forward. But this diversity applies not only to gender, but also to races and cultures. “For entrepreneurs, relying on men and women with different talents and cultures is a real asset. They shouldn’t be afraid of colleagues who think differently,” concludes Tordjman. “They should rather take this into account, and think and work within an ecosystem that integrates different disciplines.” Catch up on all the episodes of The Next Wave including conversations with VMware CEO Pat Gelsinger, the CRO & CMO of Factory Berlin, the CEO of Solarisbank, the CEO of Axel Springer, and the CEO of wefox. VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
14,546
2,021
"The 2021 gaming landscape: What developers, publishers, and marketers need to know | VentureBeat"
"https://venturebeat.com/2021/01/19/the-2021-gaming-landscape-what-developers-publishers-and-marketers-need-to-know"
"VB Lab Insights The 2021 gaming landscape: What developers, publishers, and marketers need to know This article is part of a Gaming Insights series paid for by Facebook. The gaming world was transformed when the COVID-19 outbreak brought an unprecedented surge in consumer demand. More people than ever before were playing games, both new players coming into the fold and more engaged interest from long-term gamers. The new gaming landscape also brings big changes for developers, publishers, and marketers who need to stay on top of how the gaming business has been impacted, how monetization strategies need to evolve, and what user acquisition and engagement looks like now. To help you navigate the year ahead, here’s a look at how player behavior — including motivations, preferences, and habits — has transformed; why community, especially in the current landscape, is becoming more important; and in a world that’s becoming increasingly platform agnostic, how gaming companies can meet players where they are. 
Dig deeper: Check out the full report to see all the stats! VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
14,547
2,023
"Remove your ETL bottleneck and let analytics flow | VentureBeat"
"https://venturebeat.com/remove-your-etl-bottleneck-and-let-analytics-flow"
"Remove your ETL bottleneck and let analytics flow Take business intelligence to the next level by converging analytics and accelerating time to insight There is no doubt that the digital transformation age is here. Thanks to trends accelerated by the pandemic, every company is now a digital company. The question is which ones are riding this transformational wave to achieve new levels of success and which are lagging behind. According to TechNative, 70% of companies presently aren’t hitting their business objectives, and less than a third believe they have a cogent and well-articulated data strategy to get there. It’s notoriously difficult to incorporate data into a business’ decision-making processes. At the core, the challenge is moving from a descriptive and diagnostic paradigm to a predictive and prescriptive one. 
This is the difference between storing data – and even having access to it – and being able to use data to fuel your decision-making now, instead of after the fact. In other words, having data is great, but unless you can use it to generate business intelligence and make mission-critical decisions from it, you’re stuck at the entrance to digital transformation. Those gates will eventually close. To avoid being shut out, you need the right analytics platform and component tools to generate and act on business intelligence in near real-time. Moving beyond descriptive to predictive To take the next step towards predictive and prescriptive analytics, you have to understand business intelligence on a curve. You can think of the flow of information on a line from hindsight to insight to foresight. Hindsight tells you what happened; insight tells you why something happened; and foresight tells you what will happen or what you can do to make something happen. To gain the full breadth of real-time insight, you have to master all three. And you should begin the process by identifying the questions you have and then looking for the right data and analytics that will answer those questions in the way you need. Business intelligence delivers insights all along this curve, including descriptive analysis that tells you what happened. The type of data you need for descriptive analysis includes things like sales numbers, website traffic, and customer feedback. Then, when you add an analytics layer, you’re able to use diagnostic analysis to understand why those things happened. Understanding the “why” reflects maturity in your business intelligence. The next step up on the curve is predictive analytics to tell you what’s going to happen in the future, and then the final step is prescriptive analytics that help you know what to do to make that future happen. 
Both require machine learning and artificial intelligence in combination with historical data to answer questions about things like demand forecasts, equipment failure rates, and consumer behavior. As you glean insights closer to real time, you can begin to deliver recommendations to customers just as quickly. Case study: Intelligent manufacturing It’s easier to understand the need for various types of analytics and insights in context. Say you’re a large intelligent manufacturing enterprise that makes hundreds of products, like medical and electrical equipment, under a variety of brands. There are billions of dollars in revenue at stake. The primary business challenges you need to solve for include ensuring minimum downtime for your machinery, forecasting the raw materials you need, and forecasting customer demand for the products you make. To meet those challenges, you need real-time insights into how well your IoT devices and other equipment are functioning. If their performance is declining, or they’re malfunctioning, that’s going to impact not just hardware lifecycle management but also your overall operational efficiency. Those insights drive predictive maintenance. Connected devices produce data. When you collect, store, and analyze that data, you can drill down to any core problems the machines are having. Then, by applying machine learning, you can have notifications about needed maintenance sent to your team, so they can jump in quickly and fix the problems. By proactively addressing any issues, you can ensure production quality and eliminate costly delays like unplanned downtime. You’ll also be increasing the life of your equipment, which saves costs in the long run. 
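The hindsight-to-foresight curve and the predictive-maintenance idea described here can be illustrated with a toy sketch. Everything in it is hypothetical: the vibration readings, the vendor threshold, and the least-squares trend line stand in for the real telemetry and machine-learning models an actual deployment would use.

```python
from statistics import mean

# Hypothetical weekly vibration readings (mm/s) from one machine.
readings = [2.1, 2.2, 2.4, 2.7, 3.1, 3.6]

# Descriptive: what happened? (hindsight)
avg = mean(readings)

# Predictive: fit a least-squares trend line and extrapolate one week out.
xs = range(len(readings))
x_bar, y_bar = mean(xs), mean(readings)
slope = (
    sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, readings))
    / sum((x - x_bar) ** 2 for x in xs)
)
forecast = y_bar + slope * (len(readings) - x_bar)

# Prescriptive: act on the forecast before the failure threshold is crossed.
THRESHOLD = 3.5  # hypothetical tolerance from the equipment vendor
action = "schedule maintenance" if forecast > THRESHOLD else "no action"

print(round(avg, 2), round(forecast, 2), action)
```

The descriptive average alone only says the machine ran warm; the extrapolated trend crossing the threshold is what turns the same data into a maintenance decision, which is the curve the article describes.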
From the supply chain side of the operation, you have to be able to act on data quickly. If operational data like orders, shipments, and transactions isn’t in a usable, accessible format rapidly, it’s not of much analytical use. Supply chains are typically linear, in the sense that each step in the process happens serially, and that often results in totally reactive responses to disruptions. But in a networked supply chain in which all parts are connected through a digital platform, issues across any point – be it logistics and transportation, manufacturing, assembly, or product consumers – can be flagged and ameliorated at any time. That lets you understand the complexity of customers’ demands and be savvy with planning for resources, leading to less waste, reining in inventory and delivery costs, and still ensuring customer retention. Still, a digital platform is only useful if clean data can be analyzed to provide insights. For many organizations, there may be a bottleneck in the ETL (extract, transform, load) process that hampers speed. There must be a solution in place to solve that pain point. Case study: Intelligent retail Managing a large retail organization has become a greater challenge than ever because of how revenue comes from both brick-and-mortar locations and online sales. 
Marketing has accordingly become more eclectic, comprising not just traditional ad opportunities like TV spots, billboards, and display ads, but also a savvy and engaging social media strategy and email marketing campaign. In addition to the logistical complication of managing inventory and forecasting demand for in-person and online sales, you have to think hard about how to create personalized experiences for your customer and find opportunities for cross-selling and upselling. You need supply chain optimization and real-time personalization. To personalize those experiences, you need a 360-degree view of your customers, linking multiple accounts from a variety of systems to understand the customer persona. From there you can be more targeted in your marketing and serve up recommendations that will appeal to your buyers. From those actions, you can apply machine learning to uncover patterns to predict end users’ behaviors and monetize them. For example, perhaps your customers are doing their browsing online but are coming into the store that same day to make the actual purchase. That’s a fairly nuanced pattern of behavior that will affect the way you think about the individual customer experience. In order to optimize the supply chain for an intelligent retail operation, you need to increase efficiency, reduce costs, and ensure the best performance. Start with a complete, unified view of your business that lets you turn operational data like point-of-sale transactions, purchases, and social media sentiment analysis into actionable insights. That will allow you to be smart about capital investments and improve business operations in areas like managing in-store inventory. As with the prior example, a slow ETL process makes adjusting to these insights quickly all but impossible. 
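The cross-selling pattern mining described in this case study can be sketched with a simple co-purchase count. This is a deliberately minimal stand-in for the machine-learning models the article has in mind, and the product names and order history are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history: which products were bought together.
orders = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "dock"},
    {"laptop", "keyboard"},
    {"phone", "case"},
    {"phone", "case", "charger"},
]

# Count how often each pair of products co-occurs in an order.
pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

def recommend(product: str) -> str:
    """Suggest the item most often bought alongside `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return scores.most_common(1)[0][0]

print(recommend("laptop"))  # → mouse (co-purchased in 2 of 3 laptop orders)
```

A production system would mine far richer signals (the browse-online, buy-in-store pattern, for instance), but the principle is the same: a unified view of transactions feeds a model that turns co-occurrence into a recommendation.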
Getting ahead with Azure Synapse Link for Cosmos DB Practically speaking, then, how do you remove those roadblocks and achieve those outcomes? To start, you need a modernized data environment that pulls information from multiple sources and joins systems together to establish a single source of truth. Then, you need processes to prepare that data to be analytics-ready, as well as the database, applications, and analytics to glean actionable data insights. To prepare for analysis, you have to remove barriers between operational databases and analytics databases, and they need to sync with business applications. And you need the pipelines to get the analytics to their destination. Standing in the way of all of that is an ungainly ETL process. Microsoft’s Azure Synapse Link for Azure Cosmos DB, a cloud-native hybrid transactional and analytical processing (HTAP) capability, is the key that unlocks all of it. It obviates the need for the ETL process by creating a seamless integration between the databases and Azure Synapse Analytics. Microsoft offers products and services for every part of the business intelligence system, including Cosmos DB, Azure Synapse Analytics, Synapse Spark, and Power BI. Azure Synapse Link stands in the middle of all of it. There’s no need for data integration pipelines, and latency is <90s to deliver insights quickly without impacting performance. Operational data is stored in a transactional store and auto-syncs with an analytical store. Through Azure Synapse Link, it hooks into Azure Synapse Analytics – engines like Apache Spark and SQL – and then via machine learning, big data analytics, and BI dashboards, you get all the insights you need. Setup is simple. 
Turn on Azure Synapse Link for new Azure Cosmos DB containers, then create a Synapse workspace and connect it with Cosmos DB to enable near real-time data analysis. Among the many component parts of a modernized business intelligence platform, Azure Synapse Link plays a crucial central role: It removes the ETL bottleneck, allowing all parts of the system to flow, and in that way it becomes a catalyst for achieving near real-time business insights. Learn more about the benefits and use cases of Azure Synapse Link for Cosmos DB "
14,548
2,020
"The API revolution that’s securing the future of virtual health care | VentureBeat"
"https://venturebeat.com/the-api-revolution-thats-securing-the-future-of-virtual-health-care"
"The API revolution that’s securing the future of virtual health care When the pandemic hit, the use of telehealth, which until then had been only lightly used across the health care ecosystem, soared. The CDC found that the number of telehealth visits increased by 50% during the first quarter of 2020, compared with the same period in 2019. In March 2020 alone, telehealth saw a 154% increase in visits compared to the previous year. This boom has revealed the overwhelming need for industry-standard APIs, software solutions, and hardware bundles to help telehealth platform vendors and health care providers rapidly deploy virtual care services. “The pandemic created the opportunity for this mode of care to gain traction and use,” says Gautam M. Shah, vice president of Platform and Marketplace at Change Healthcare. 
“And the expansion in the ability of providers to offer telehealth appointments was a systemic change, forcing providers to rethink how they operate.” That includes the underlying availability of the technology that enables telehealth; regulatory restrictions being lifted in emergency measures with the passage of the CARES Act; recognition from provider organizations that this was a better way to serve the health care consumer; and, not surprisingly, increased demand by patients. “The pandemic forced us to take a hard look at all the places where, in our health care journey — as patients, as providers, as insurers — the gaps existed,” Shah says. “Telehealth is a shining example of how we rapidly embraced a virtual-care model through legislation, through technology, and through connectivity. With recent moves by Amazon, Walmart, and United Healthcare Group in this space, it looks like telehealth will continue to grow.” And this expansion will require breaking down long-standing information silos — silos that have locked up patient data in scores of places, from the hospital to primary care offices, specialist practices, diagnostic facilities, and more. APIs: The foundation of health care virtualization Creating rock-solid virtual care includes the critical need for data interconnectivity to provide an easy-to-navigate virtual-care experience that results in a positive outcome for the patient, the doctor, and the insurer. “Data sharing and information exchange isn’t just critical to virtual health care, it is the bedrock,” Shah says. “You can’t have modern, consumer-friendly health care experiences without data interoperability and data flow.” Application programming interfaces, or APIs, are the intelligent integrations that enable all of this functionality at scale. APIs connect disparate systems to enable seamless data sharing and communication. 
That’s key to keeping pace in the digital era: 90% of health care organizations see APIs as mission critical or very important, a Change Healthcare API study revealed, demonstrating that APIs are emerging as the backbone of the digital health economy and will tip the balance when used at scale by 2023. “If data is the bedrock that we build this house on, the APIs are the foundation,” Shah says. “The APIs are how we make these experiences work, how we share data, how we provide connectivity to all the different systems in place.” That requires thinking of APIs as products that solve problems versus merely serving niche data-transfer functions, so developers can leverage their capabilities and patterns to more quickly create purpose-built, scalable, multifunctional solutions to manage specific parts of the workflows. Productizing APIs ensures that the data is secure and private, as well as performant, which are among the chief concerns of the health care industry. “Using APIs enables health care startups, hospitals, health plans, and other health care-focused businesses to more rapidly develop innovative products and services while meeting regulatory requirements and advancing the health care industry’s adoption of modern, open interoperability standards,” Shah says. The use of open standards means that APIs and services can be readily deployed across the entire health care ecosystem, internally and externally, to help reduce the time and cost associated with many health-information technology deployments. Over time, Change Healthcare expects to have hundreds of open, standards-based APIs available that represent the company’s comprehensive portfolio of health care IT solutions, as well as API-based solutions from its partners. This will help solve not only long-standing challenges but ready the industry for the future of health care. 
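The "API as product" idea above can be sketched in a few lines. Everything here is hypothetical: the field names, endpoint behavior, and response shape are illustrative only, not Change Healthcare's actual eligibility API.

```python
# Hypothetical sketch of a "productized" eligibility-check API contract.
# All field names are invented for illustration.

REQUIRED_FIELDS = {"member_id", "payer_id", "provider_npi", "service_date"}

def check_eligibility(request: dict) -> dict:
    """Validate a standards-based request and return a structured response."""
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        # A product-grade API fails fast with a machine-readable error,
        # so every consumer handles problems the same way.
        return {"status": "error", "missing_fields": sorted(missing)}
    # A real service would query the payer's system of record here.
    return {
        "status": "active",
        "member_id": request["member_id"],
        "plan": {"deductible_remaining": 500.00, "copay": 25.00},
    }

resp = check_eligibility({"member_id": "M123", "payer_id": "P9"})
print(resp["status"])  # prints "error": two required fields are missing
```

The point of the sketch is the contract, not the logic: because every consumer sees the same request and response shapes, a developer can plug the function in without learning each payer's internal data model.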
The benefits of APIs to patients and providers APIs are behind the scenes and invisible to the patient — but the enormous benefits to patients are front and center. What matters is the way APIs enable providers to have the most up-to-date clinical and insurance information on a patient, allowing them to offer the most appropriate care and ensure the claim and reimbursements are timely and accurate. But it also means patients have access to their own data, such as immunization records, as well as the ability to schedule appointments or pay bills in seamless, mobile-friendly ways. “One of the biggest dissatisfiers in a health care experience is how confusing and redundant it is for patients: refilling out forms, having to redo tests, coordinating between the care provider and the insurance company,” Shah points out. “It’s especially painful when you have a chronic condition, like more than half of Americans do.” Having all that information available means patients have experiences that are driven by their health history, allowing them to transit the system most effectively, especially as they age. Another big benefit is efficiency. The United States health care system has an access problem, among others. On average, it takes 45 days to get an appointment with a specialist, and then that appointment is filled with forms and reviewing patient history, Shah says. APIs remove the administrative burden and reduce the friction of the care process, which leads to better access and ultimately, better care. How payers benefit from APIs When payers become more efficient and when they have better data and use it for better analytics, the payment process works faster, more accurately, and more efficiently. 
It’s ensuring that money can flow in the system in an efficient manner so all members of the health care ecosystem spend less time dealing with operational activities and more time dealing with care. “It’s critical, and that’s where the APIs help. They create that data flow, which impacts the ability to deliver great care for the providers and the patients, and they make it more efficient for the payers,” he says. Back-office processes, like checking eligibility, processing claims, and providing payments are relatively automated processes behind the scenes, but they’re disconnected. Tying them together with health care data and APIs creates a flow of financial transactions that allows the back office to operate more efficiently and process claims in the right way. It simplifies and automates processes and improves transparency. The payer can determine more quickly what a patient owes, and the patient has that information up front. “The inherent complexity of that process grows as the number of doctors, health care groups, hospitals, and health systems grow, from primary doctors to specialists,” Shah says. “If you can create efficiency at every one of those steps, that process grows with scale.” On the regulatory side, the CMS Interoperability and Patient Access Rule requires payers to give members control over their information, which is an unprecedented opportunity for innovation, Shah says, especially for third-party digital health care developers. They can make their payment operations or reimbursement operations more efficient and offer better experiences for health care consumers, with more transparency and better ways to manage health care, optimize spend, and more. 
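As a toy illustration of tying those disconnected back-office steps together, the sketch below chains a hypothetical eligibility check into claim adjudication so the patient's share is known up front. Every function and field name here is invented, not a real payer API.

```python
# Toy sketch: connecting eligibility, adjudication, and payment into one
# API-driven flow. All names and amounts are illustrative.

def check_eligibility(member_id):
    # Stand-in for an eligibility API call to the payer.
    return {"member_id": member_id, "active": True, "copay": 25.0}

def adjudicate_claim(claim, eligibility):
    # Stand-in for claims processing; uses eligibility data directly
    # instead of a disconnected manual lookup.
    if not eligibility["active"]:
        return {"claim_id": claim["claim_id"], "status": "denied",
                "patient_owes": claim["billed"]}
    patient_owes = eligibility["copay"]
    return {"claim_id": claim["claim_id"], "status": "approved",
            "payer_pays": claim["billed"] - patient_owes,
            "patient_owes": patient_owes}

def process(claim):
    # One connected flow: eligibility feeds adjudication, so the patient
    # sees what they owe up front.
    elig = check_eligibility(claim["member_id"])
    return adjudicate_claim(claim, elig)

result = process({"claim_id": "C1", "member_id": "M123", "billed": 180.0})
print(result)  # approved; payer pays 155.0, patient owes 25.0
```

Each step on its own is already automated in real systems; the sketch only shows what the article describes, that value comes from the data flowing between them.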
“From the payer perspective, that access to data, the ability to do analytics on the data, on the population, and understand how best to tune their models, is critical,” he says. “And as more people go to high-deductible health plans, it’s a management benefit for members, who can understand their payment benefits more easily and figure out how best to use health care benefits.” Implementing and scaling APIs securely APIs are becoming more established in health care, and any developer should be able to easily and quickly use them, Shah says. However, solving challenges related to data transfer and solution development is unquestionably not an efficient way to use your time when these have already been solved, particularly if you think of APIs as products that provide solutions. Rather than merely serving niche data-transfer functions, developers can leverage productized API capabilities and patterns to more quickly create scalable, multifunctional solutions. “A developer wants to pick up and use something that already exists; that offers the functionality, connectivity, and operability they need, so they can spend their time building a great experience, the next product, and so on,” Shah says. “That’s the approach Change Healthcare has taken, building secure, private, and performant APIs as products that a developer can plug into their product and start using right away.” That’s why the company offers a destination for developers, the API & Services Connection™ for Healthcare. Digital health companies can visit the portal to select APIs, use the sandbox to test out their APIs and their functionality, and click the “Buy” button to efficiently check out. It also offers comprehensive resources for developers, including documentation, implementation guides, a community, the sandbox, and more. 
The API & Services Connection currently has more than 70 API products that cover the care journey from pre-care, during care, post-care, and continuity of care. It allows developers to engage in those transactions in a simple, standards-based way, from a company with more than 2,400 payer connections, and which performs 15 billion health care transactions a year. The API products are built to common standards to promote security, interoperability, and scalability. Change Healthcare uses this API & Services Connection internally, which is critical, Shah says. “When developers come to Change Healthcare, they’re using the same APIs that we’re using to develop our products,” he says. “We’re a $3.5 billion health care technology company. You’re taking advantage of the products that we’re developing; that we’re using to deliver that much value to the health care ecosystem in that API that you can just pick up and use.” The APIs are a great democratization of the capability to build out virtual health care solutions for companies of every size. Change Healthcare has customers ranging from the biggest payers and health systems in the United States to entrepreneurs, all of whom are using this API technology to innovate ways to provide care and solve problems at the point of care. The future of health care Data interoperability is the cornerstone of future innovation, Shah says, and regulations have created the baseline. As we proceed, because hospitals, health systems, and payers are putting the APIs into their environment, he expects to see greater liquidity of data. From a patient/member perspective, that means greater access to our health care data. With increased access, consumers will be able to carry that data to share with providers wherever and whenever necessary. 
Greater liquidity of data also means that the general efficiency of care will start to increase because it will be easier to use the interoperable data underneath, and the APIs, to create seamless care experiences. An example of that will be in the transitions of care, like admissions, discharges, and transfers. Those are the seams in the experience. They’re where data, people, and information traditionally get lost. A pool of accessible data connected together with APIs and workflows covers those seams. “That’s an amazing opportunity to innovate, because now that we have access to the data, whether it’s a provider organization or a digital health company, you can start to create the experiences that smooth the discharge-and-transfer part of that transition,” he explains. Some of that will be driven by new regulations, but data and APIs are what raise the health care experience to new levels, because it will become smoother. This will also help address areas of health care previously less served by the digital health ecosystem, including things such as maternal health and behavioral health, which has become an even greater crisis over the course of the pandemic. Providing innovative ways to engage with and support the behavioral health of the population is critical, says Shah. It also includes health equity, or broadening of access to care for people across the country, no matter where they live or what kind of access they have (or don’t have) to technology. “A big part of the work I do every day, bringing the APIs and data out into the world, is simplifying how we operate in the system to bring more people into the fold,” Shah says. 
“That’s why comprehensiveness, security, scale, performance, and privacy are so critical, because when you get into the mass of the population, if you don’t have solutions that operate in that way, with that comprehensiveness and trust we’ve built into our products, it’s going to be hard to get to that scale.” Explore API solutions for telehealth, eligibility and claims, billing, and interoperability. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,549
2,023
"Using machine learning to tackle the world’s biggest problems | VentureBeat"
"https://venturebeat.com/using-machine-learning-to-tackle-the-worlds-biggest-problems"
"Using machine learning to tackle the world’s biggest problems Machine learning has graduated from the realm of science fiction to become a core, transformative technology for organizations across industries and categories. The unique potential and power of machine learning is sparking genuine innovation, powering the ideas that are improving lives and protecting our planet right now. With machine learning, organizations are making inroads toward ending the pandemic, protecting and supporting our veterans, finding homes for the homeless, understanding climate change, and more. But this is just the beginning. “The technology is ripe, and it now has the ability to provide new and significant solutions for some of the world’s biggest issues,” says Michelle Lee, vice president of the Amazon Machine Learning Solutions Lab. 
Tens of thousands of companies and organizations worldwide have turned to Amazon Web Services (AWS) for machine learning – from BlueDot, who is tracking disease outbreaks worldwide, to the Fred Hutchinson Cancer Research Center, finding new ways to treat cancer, and Mantle Labs, a startup using machine learning to offer farmers cutting-edge crop monitoring, and more. However, access to machine learning, a new technology to so many of these organizations, can often come with a skills and technology deficit. That’s where AWS steps in, partnering with innovators to bridge the gap and bring pioneering solutions that tackle our most urgent and important challenges. Helping companies develop groundbreaking machine learning solutions The Amazon ML Solutions Lab pairs an organization with machine learning experts to help identify and build machine learning solutions that tackle the issues at the heart of the customer’s mission and purpose. Whether you are a large global enterprise, a startup, or a non-profit, the ML Solutions Lab brings over 20 years of Amazon ML innovations to help organizations get started with machine learning. “A lot of these customers are addressing new opportunities where they’re looking at more exciting and more efficient ways of doing their business or going after research problems that were previously untenable,” Sri Elaprolu, senior manager in the ML Solutions Lab explains. In designing a machine learning solution, Elaprolu says it’s essential to consider the capabilities and skill set that a particular organization has in-house to operate the solution long-term – and not hand off something that is beyond the abilities of the organization to maintain. “As part of the Lab we have a global mission,” Elaprolu continues. 
“What we’re doing is impactful in the real world for everyday people, and the results are extremely compelling. Applying technology to solve real-world problems that have a meaningful impact on humans and life is absolutely thrilling.” Let’s take an in-depth look at some of the most important work being done today. RallyPoint: Enabling faster suicide intervention among veterans Since 2012, RallyPoint, a social media platform designed for the broader U.S. military community, has provided an online user experience focused on military service members, veterans, families, caregivers, and survivors to help them lead more successful and fulfilling lives. Among the millions of public discussions on the platform, a small percentage come from members who share thoughts and behaviors about self-harm. The Department of Veterans Affairs estimates that approximately 17 military veterans die by suicide each day – and RallyPoint has made it a priority to offer critical mental health resources and support to these men and women when they need it. Developing a way to quickly and accurately sift through these high-risk public posts created by a small minority of RallyPoint users is a challenge. In order to speed discovery of these at-risk public posts, RallyPoint turned to the ML Solutions Lab. The Lab worked closely with RallyPoint to develop a machine learning model that can quickly analyze public posts on the RallyPoint platform and help determine whether there is an indication of self-harm. With the help of this machine learning model, RallyPoint has been able to successfully flag concerning posts quickly and accurately while reducing the amount of manual review needed to enable a potentially life-saving intervention. Behind the solution The team at the ML Solutions Lab also worked closely with RallyPoint and mental health experts at Harvard University’s Nock Lab to tackle this challenge. 
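The flag-for-review workflow described above can be illustrated with a deliberately simple stand-in. RallyPoint's real model was trained on Amazon SageMaker with expert-annotated data; the keyword scorer below is only meant to show how automated triage narrows the pool of posts that human reviewers must read.

```python
# Toy stand-in for automated triage: score posts and surface only the
# small minority that cross a review threshold. The terms and weights
# are invented; a real model learns these signals from labeled data.

RISK_TERMS = {"hopeless": 2, "alone": 1, "can't go on": 3}

def risk_score(post: str) -> int:
    """Sum the weights of risk terms found in the post."""
    text = post.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def triage(posts, threshold=3):
    """Return only posts whose score reaches the review threshold."""
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    "Great reunion with my old unit this weekend!",
    "Feeling hopeless and alone lately.",
]
flagged = triage(posts)
print(len(flagged))  # 1: only the concerning post goes to reviewers
```

The design point mirrors the article: the model does not replace human judgment, it reduces the manual review needed so intervention can happen sooner.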
First, the Amazon ML Solutions Lab collaborated with RallyPoint to build a machine learning model using Amazon SageMaker and anonymized public posts provided by RallyPoint. Then, mental health experts at Harvard helped train the model by annotating additional posts using Amazon SageMaker Ground Truth in order to continuously improve the accuracy of the predictions made by the model. Going forward, RallyPoint and Harvard will continue to further refine the model while evaluating the best content (e.g., mental health programs, hotlines, support groups) and preferred method to surface information to users. In the long term, the goal of the solution will be to augment the community engagement by RallyPoint member administrators that takes place on the platform today when there is self-injurious content. “We are encouraged by the early results – and how the technology is contributing to tackle this challenge,” Lee says. “It is our privilege to support the military community in this work.” CORD-19 Search: Making sense 
of COVID-19 research As of July 2020, COVID-19 has infected more than 17 million people worldwide, and more than 674,000 have died. Since the virus was first identified in late 2019, a huge amount of cutting-edge research on ways to fight COVID-19 has been published, with more appearing every day – coming so fast that researchers can’t keep up with it. Faced with an exponentially increasing volume of information, world researchers are finding it difficult to derive insights that can inform treatment and prevention. To help combat the problem, the Amazon ML Solutions Lab worked with teams across AWS, to build and launch CORD-19 Search, a new search website powered by machine learning that can help researchers quickly and easily search for research papers and documents. How CORD-19 Search works CORD-19 Search was built on the Allen Institute for AI’s COVID-19 open research data set of more than 130,000 research papers and other materials. This machine learning solution uses Amazon Comprehend Medical to extract relevant medical information from unstructured text, including disease, treatment, and timeline. The information is then indexed in Amazon Kendra, an enterprise search service with natural-language query capabilities that make it easier to find and rank related articles. The platform returns the most relevant articles corresponding to a researcher’s question, along with other related materials that may be of interest. The research can now be focused in a much more narrow, relevant space, versus having to sift through thousands of results. To help researchers find and visualize insightful relationships between scientific articles, the team introduced a knowledge graph. The COVID-19 knowledge graph incorporates articles, authors, institution affiliations, citations, and biomedical entity relationships; the resulting graph contains over 336K entities and 3.3M relationships. 
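A stdlib-only toy can illustrate the extract, index, and query flow that CORD-19 Search implements with Amazon Comprehend Medical and Amazon Kendra. The "entity extraction" here is plain tokenization and the corpus is invented; the production services return typed medical entities and natural-language ranking instead.

```python
# Toy sketch of an extract -> index -> query pipeline.
from collections import defaultdict

papers = {
    "p1": "remdesivir treatment outcomes in severe covid-19 pneumonia",
    "p2": "aerosol transmission of sars-cov-2 in hospital wards",
    "p3": "remdesivir and dexamethasone combination therapy trial",
}

# "Extraction" step: here just tokenization; Comprehend Medical would
# return typed entities (condition, medication, timeline) instead.
index = defaultdict(set)
for pid, text in papers.items():
    for token in text.split():
        index[token].add(pid)

def search(query):
    """Rank paper ids by how many query terms each paper contains."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for pid in index.get(term, ()):
            scores[pid] += 1
    return sorted(scores, key=lambda p: (-scores[p], p))

print(search("remdesivir trial"))  # ['p3', 'p1']: p3 matches both terms
```

Even at this toy scale the benefit the article describes is visible: a query lands on the few relevant papers instead of forcing a scan of the whole corpus.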
Perhaps most helpful to researchers, the knowledge graph powers a recommendation engine, surfacing the most highly relevant articles based on a user’s search query and even browsing history. And the full knowledge graph has been made publicly available to researchers through the AWS COVID-19 Data Lake to enable future insights and discovery. “AWS’s long-term vision is to expand the CORD-19 Search architecture to incorporate even more data resources than what we’ve already incorporated,” Lee says. “This will allow researchers to uncover patterns of disease progression, make data-driven decisions, and help improve patient outcomes in the effort to unlock data related to COVID-19.” Investing in AI and machine learning for societal change AWS knows that the next brilliant innovations may just now be percolating in the minds of brilliant, civic-minded entrepreneurs and engineers who need assistance to bring their ideas to life. They’ve invested in developing resources to support those looking to use machine learning in this way, beyond the Amazon ML Solutions Lab. These include programs like the AWS Imagine Grant Program and the Amazon Research Awards. AWS Imagine Grant Program: Helping PATH change lives in L.A. The AWS Imagine Grant is awarded to non-profits and non-governmental organizations that are using powerful technology to solve some of the world’s toughest challenges. It provides grant winners financial and operational support including AWS Promotional Credits, training services, marketing support, and more. Recipients are finding cures for childhood cancer. Stopping illegal fuel dumping in our oceans. Sharing knowledge and culture. Giving unbanked populations a financial voice. Guiding women facing life-threatening breast cancer diagnoses. Helping veterans access the support they’ve earned. Or, like PATH – another recipient of the AWS Imagine Grant Program – using machine learning to address homelessness. 
For someone living on the street or in a homeless shelter in Los Angeles, the wait to get housing through the county’s Coordinated Entry program usually takes months. PATH, an L.A.-based organization founded to address the ever-increasing issue of homelessness, applied for an AWS Imagine Grant to develop a way to shorten that wait dramatically. With the grant and support from the AWS team, the organization developed LeaseUp to connect clients in real time with the best possible housing for their needs. Amazon Personalize captures relevant information about available units of housing so case managers can recommend the best housing option to their clients in real time. By integrating this technology, the organization has been able to match over 600 individuals experiencing homelessness with housing – and reduce the time it takes to do so. Timing in these situations is often critical; a person who is ready to come in and get help one day may not return the next. LeaseUp aims to add 2,000 new units to its database over the next year to help even more people make it home. Bringing more existing apartments onto the platform, as well as working more seamlessly with the landlords to list rental units, are important steps in not just addressing homelessness but in ending it. Amazon Research Awards: Helping Oxford unlock key mysteries of climate change The Amazon Research Awards are dedicated to creating the future with scientists around the globe – and a key component of that mission is helping academics advance the frontiers of machine learning. Providing access to the latest compute, storage, and networking is key to lay the groundwork for PhD candidates and graduate students to further their research. Now, recipients of the award are using machine learning and its applications across a wide range of problems, from finding new therapies for cancer to solving climate change and exploring outer space. 
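The LeaseUp-style real-time matching described above can be sketched as a simple overlap ranking. PATH's production system relies on Amazon Personalize; the unit data and scoring rule below are purely illustrative.

```python
# Toy ranker in the LeaseUp spirit: score available units by how many
# of a client's needs they meet. All data here is invented.

units = [
    {"id": "u1", "features": {"wheelchair_access", "near_transit"}},
    {"id": "u2", "features": {"pets_allowed"}},
    {"id": "u3", "features": {"near_transit", "pets_allowed"}},
]

def recommend(client_needs, units, top_k=2):
    """Return unit ids ranked by overlap with the client's needs."""
    ranked = sorted(
        units,
        key=lambda u: (-len(u["features"] & client_needs), u["id"]),
    )
    return [u["id"] for u in ranked[:top_k]]

print(recommend({"near_transit", "pets_allowed"}, units))  # ['u3', 'u1']
```

A case manager would see the top-ranked units immediately rather than searching listings by hand, which is the time saving the article attributes to the real system.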
The awards provide eligible researchers and university programs with cash awards and AWS Promotional Credits so that they can do more, more quickly, using the most advanced compute, analytics, and machine learning tools available in the cloud. Climate scientists at Oxford are working to unearth new ways to combat climate change, as a recipient of the Amazon Research Awards. Machine learning is an essential tool for climate change research since climate science is such a data-intensive field. Climate models are enormous, requiring supercomputers to run them, and analysis requires a huge amount of earth observation data. As data continues to grow, along with complexity, it becomes impossible to explore all avenues of research manually. This is one of many ways AWS and Oxford continue to work together, including a recently announced collaboration to fund a testbed of new research in AI and data science across the university. In the Climate Processes group in the Department of Physics at Oxford, the hope is that by studying the effects of aerosol pollution on clouds they’ll be able to break new ground in global warming research, leveraging tools like Amazon Deep Learning AMIs running on EC2. Clouds reflect sunlight back to space, acting like an umbrella that cools the earth. Hence, even small changes in clouds in response to global warming or air pollution could have a big impact on environmental health and serve to accelerate or dampen the greenhouse gas effect. Machine learning models can track these changes to understand why clouds change, which could be the key to addressing global warming. Now Oxford scientists will be able to analyze satellite data covering the entire earth multiple times a day, providing countless images of aerosol-impacted clouds, which they’re able to process in the AWS cloud, thanks to the grant from AWS. 
In September, 15 PhD students across Europe will start working with teams at Amazon to train on the machine learning tools that will help quantify these effects, and understand their dependence on cloud type, which regions they form in, where they are, and how prevalent they are. “Such scalable machine learning techniques allow us to make rapid progress in an area where researchers previously spent months of their time on identifying features in fairly limited datasets manually,” says Philip Stier, a professor of atmospheric physics. The future of AI and machine learning to benefit society “We’re working in a field that is fast-emerging, alongside a team of highly accomplished and experienced scientists and pushing the boundary on a daily basis,” says Elaprolu. “A lot of the problems that our team tackles have not been dealt with previously. We’ve seen the power of machine learning when applied, and how transformative it can be.” Organizations are constantly working on innovative techniques to solve the most important issues the world is facing today, making profound and significant impact across the world. And the Machine Learning Solutions Lab is there, bringing its technical skill and expertise to its customers, helping them pursue their world-changing goals. “The greatest transformational impact comes when we bring together technology experts, such as those we have at AWS, with our customers’ subject matter expertise,” Lee says. “When you combine those two, we have the potential to create powerful change to build for a better today.” Learn more about how machine learning is being used to tackle today’s biggest social, humanitarian and environmental challenges. 
"
14,550
2,023
"Teradata deepens Dataiku integration to accelerate enterprise AI projects | VentureBeat"
"https://venturebeat.com/ai/teradata-deepens-dataiku-integration-to-accelerate-enterprise-ai-projects"
"Teradata deepens Dataiku integration to accelerate enterprise AI projects Today, multicloud data giant Teradata announced it is expanding its integration with AI startup Dataiku to enable enterprise users to import and operationalize their Dataiku-trained AI models within the Teradata Vantage platform. The move, Teradata claims, will help companies move past deployment complexities and accelerate their AI projects from pilot to production, at scale. This is crucial as AI projects often end up in the proof-of-concept graveyard, and if they do make it to deployment, they are delayed by months due to operationalizing roadblocks. The expanded integration capabilities are available starting today, both companies said. 
Teradata-Dataiku team up for AI With Vantage, Teradata offers enterprises a modern analytics platform that combines open source and commercial analytic technologies to operationalize insights from data and enable descriptive, predictive and prescriptive analytics. Dataiku, on the other hand, gives a central working environment to experiment with data and train, deploy and manage AI applications. While each platform has its own domain, they share synergies through the Teradata plugin for Dataiku. The integration allows their joint customers to access and execute certain analytic functions that reside in Teradata Vantage within Dataiku. This way, users of Dataiku could easily tie Vantage analytic functions, like data preparation, into their data science and AI project workflows. Now, Teradata and Dataiku are deepening this engagement by expanding the support for all Vantage analytic functions, including data cleansing, feature engineering, machine learning (ML), time series and digital signal processing. More importantly, the integration also now supports high-performance processing of Dataiku-developed ML models within Teradata Vantage. Previously, the models from the platform had to be converted to a common interchange format such as PMML (predictive model markup language). Now, the models can be imported in Dataiku’s own native model format, reducing the steps in the ML prediction pipeline and removing potential model conversion complexities. This can help teams accelerate their AI projects to production. 
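The benefit of native-format model import can be illustrated with a minimal sketch: serialize a trained model as-is and score it in another environment, with no intermediate conversion to a format like PMML. This uses Python's pickle as a stand-in for a native model format and is not Teradata's or Dataiku's actual BYOM machinery.

```python
# Hedged illustration of the "bring your own model" idea: export a model
# in its native serialized form and score it elsewhere, skipping any
# model-conversion step. pickle is a stand-in for the native format.
import pickle

class ThresholdModel:
    """A trivially 'trained' model: predicts 1 above a learned cutoff."""
    def __init__(self, cutoff):
        self.cutoff = cutoff

    def predict(self, xs):
        return [int(x > self.cutoff) for x in xs]

# Train in the modeling environment...
model = ThresholdModel(cutoff=0.5)
blob = pickle.dumps(model)  # export in the model's native form

# ...and score in the serving environment: one load step, no
# conversion pipeline in between.
scorer = pickle.loads(blob)
print(scorer.predict([0.2, 0.7, 0.9]))  # [0, 1, 1]
```

Every conversion step removed is one less place where model behavior can drift between training and serving, which is the complexity the article says the native-format import avoids.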
“In general, data scientists will perform preparation, cleansing and transformation of their Vantage data through Dataiku workflows using the Teradata plugins … Analytic ML models are then trained with Dataiku ML algorithms using this training data from Vantage,” Hillary Ashton, chief product officer at Teradata, explained while speaking with VentureBeat. “These models can then be exported to Vantage in native Dataiku format for at-scale inference/scoring using ClearScape Analytics’ BYOM (bring your own model) functionality. This process can be iterated until a final model is achieved, with the Dataiku trained model productionized using BYOM model scoring in Vantage workflows.” Available right away Ashton said the enhanced capabilities are now live and multiple joint customers are already using them. She did not share specific outcomes seen so far, but said the company would be “happy to share results once this work is complete.” Teradata’s Q1 2023 recurring revenue grew 4% in constant currency, which contributed to generating more than $300 million in gross profit and over $100 million in free cash flow. In December 2022, Gartner named the company a leader in its Magic Quadrant for cloud database management systems, citing price predictability and financial governance as key strengths. "
14,551
2,022
"The number of tech unicorns fell 40% in 2022 | Global Startup Ecosystem report | VentureBeat"
"https://venturebeat.com/ai/the-number-of-tech-unicorns-fell-40-in-2022-global-startup-ecosystem-report"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The number of tech unicorns fell 40% in 2022 | Global Startup Ecosystem report Share on Facebook Share on X Share on LinkedIn The Global Startup Ecosystem report has mixed signals. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As venture capital investments fell during the economic downturn, the number of tech unicorns fell 40% in 2022, according to the Global Startup Ecosystem report. Last year, we saw a slowdown in the number of unicorns, with a global decline of 40% from 2021’s 595 to 359. However, seven ecosystems still produced their first tech unicorn in 2022, according to the report by Startup Genome and the Global Entrepreneurship Network (GEN). The report said that a recession is a good time to invest in startups — high interest rates can benefit startups, concentrating capital and talent into ventures that create value. 
Startups funded during the Great Recession had slightly higher exit multiples over total money invested than those funded during economic expansions. “Despite current economic challenges, we are confident that, equipped with the right knowledge, entrepreneurs, policymakers, and community leaders everywhere can leverage opportunities to come together and show how innovative technologies can not only continue to drive growth and job creation but simultaneously help save the planet and ensure a better future for everyone,” said JF Gauthier, CEO of Startup Genome, in a statement. “This essential mission cannot be put on hold while we wait out rocky economic times.” VC weakening VC funding globally began its downward trend in the first quarter of 2022, dropping 13% from Q4 2021. Overall, 2022 declined by 35% from 2021, the report said. Although fewer startups were funded in 2022 globally, each deal was larger on average: there was an 18% decline in the number of deals, but a 17% decline in deal amount, meaning that the average deal size grew 2%. The biggest tech exit of the year was Miami-based MSP Recovery’s $32.6 billion initial public offering (IPO), which pales in comparison to 2021’s biggest exit, Beijing-based Kuaishou’s $150 billion IPO, which was nearly five times larger. Reflecting AI’s increasing use and intersection with other sub-sectors, AI and Big Data was the sub-sector with the highest count of total VC deals in 2022, making up 28% of the global share. It also has the highest growth in number of exits, at 74%, from 2017–2018 to 2021–2022. As Deep Tech innovations become more integrated into the startup world, its exit amount grew by 326% from 2017–2018 to 2021–2022, faster than non-Deep Tech technologies, which grew 225%. 
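The average-deal-size figure follows directly from the two headline declines, since average deal size is total amount divided by deal count. A quick check with the rounded percentages lands near the reported 2% (the exact figure presumably comes from unrounded underlying data):

```python
# Average deal size = total funding amount / number of deals, so its
# change in 2022 follows from the two reported year-over-year declines.
deal_count_change = -0.18  # 18% fewer deals than in 2021
amount_change = -0.17      # 17% less total funding than in 2021

avg_deal_size_ratio = (1 + amount_change) / (1 + deal_count_change)
growth_pct = (avg_deal_size_ratio - 1) * 100  # roughly +1% with rounded inputs
```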
Regional VC investments Overall VC funding in Asia dropped by 31% from 2021, from $102 billion to $70 billion. However, Asia was the least impacted global region in terms of early-stage funding amount, dropping just a single percentage point from 2021 to 2022. In 2022, the amount of early-stage funding in Europe was down 15% from 2021, but the average early-stage deal amount grew by 7% due to a significant reduction in the number of early-stage deals, just 75% of 2021’s number. Latin America declined 72% in Series B+ funding amount from 2021 to 2022, while deal count declined 54%. From 2018–2022, Latin America experienced a 65% increase in Series B+ deal count and a 143% increase in Series B+ amount. In 2022, the Middle East and North Africa (MENA) region experienced a decline of 19% in Series B+ deal amount and 14% in total VC funding. Over 2018–2022, MENA saw a 96% rise in early-stage funding amount, a 28% growth in Series B+ deal count, and a 113% increase in Series B+ deal amount. In 2022, Oceania experienced a 31% year-on-year decline in Series B+ deal amount, a 10% decline in the number of Series B+ deals, and a 13.6% decline in early-stage funding amount. However, Oceania experienced a 60.7% increase in early-stage funding amount over 2018–2022, the highest of any global region for this period. In sub-Saharan Africa, early-stage deal count declined 5.9% and early-stage funding amount 6.7% from 2021 to 2022. Looking at 2018–2022, early-stage funding to the region was up 227% and early-stage deal count grew 43.8%. North America’s early-stage funding dropped 26%, and Series A deal count fell 25%, from 2021 to 2022. Regardless, North America is still the world’s leading startup nation, making up 50% of the top 30 plus runners-up ranking. The top three ecosystems have maintained their ranking positions from 2020, with Silicon Valley at the top, followed by New York City and London tied at No. 2. 
Silicon Valley continues to dominate despite having a reduced market share, with Series A deal amount contracting by 75% and Series B+ by 73% from 2021 to 2022. China’s dominance declined, while India continued to grow: eight Chinese ecosystems fell in the rankings from last year, including the leading hubs of Beijing, Shanghai, and Shenzhen, while seven Indian ecosystems moved up, including Delhi and Bengaluru-Karnataka, in the top 30, with Mumbai tied at No. 31. Boston and Beijing both slipped out of the top five to No. 6 and No. 7, respectively, both losing two positions. This has paved the way for Los Angeles to rise to No. 4 and Tel Aviv to No. 5, both gaining two spots. Singapore entered the top 10 for the first time, moving up 10 places to No. 8 from No. 18 in the GSER 2022, the biggest improvement in the rankings. Melbourne moved up an impressive six places from last year, to reach No. 33. The Australian ecosystem grew 43% in Ecosystem Value from the GSER 2022. The top 100 Emerging Ecosystems are collectively worth over $1.5 trillion in Ecosystem Value, a 50% increase from the GSER 2022. Istanbul took the No. 1 spot in the new Strong Starters ranking, which identifies the top 25 Emerging Ecosystems where early-stage funding activity is most robust. “Given that over half the companies on the 2009 Fortune 500 list launched during a recession or bear market, we know that lean economic times can produce high-performing startups,” said Jonathan Ortmans, president of the Global Entrepreneurship Network, in a statement. “Despite recent downturns in investment, this report foreshadows where we might see the world’s most disruptive and solution-driven companies emerge in the years to come — and provides unparalleled insights that policymakers and community leaders need to build resilient startup ecosystems.” 
"
14,552
2,023
"EU ‘in touching distance’ of world’s first laws regulating artificial intelligence | Artificial intelligence (AI) | The Guardian"
"https://www.theguardian.com/technology/2023/oct/24/eu-touching-distance-world-first-law-regulating-artificial-intelligence-dragos-tudorache"
"Dragoș Tudorache, MEP who has spent four years drafting AI legislation, is optimistic final text can be agreed by Wednesday US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing World Europe US Americas Asia Australia Middle East Africa Inequality Global development Once adopted by the European parliament, the AI Act could introduce rules for everything from chemical weapons made through AI to copyright theft. Photograph: Jean-François Badias/AP Once adopted by the European parliament, the AI Act could introduce rules for everything from chemical weapons made through AI to copyright theft. Photograph: Jean-François Badias/AP Artificial intelligence (AI) EU ‘in touching distance’ of world’s first laws regulating artificial intelligence Dragoș Tudorache, MEP who has spent four years drafting AI legislation, is optimistic final text can be agreed by Wednesday in Brussels Tue 24 Oct 2023 09.44 EDT The EU is within “touching distance” of passing the world’s first laws on artificial intelligence, giving Brussels the power to shut down services that cause harm to society, says the AI tsar who has spent the last four years developing the legislation. A forthcoming EU AI Act could introduce rules for everything from homemade chemical weapons made through AI to copyright theft of music, art and literature, with negotiations between MEPs, EU member states and the European Commission over final text coming to a head on Wednesday. 
“Artificial intelligence does have a profound impact on everything we do and therefore it was time to bring in some safeguards and guardrails on how this technology will evolve for the benefit of our citizens,” said Dragoș Tudorache, a Romanian MEP and co-rapporteur of the parliamentary committee steering through the legislation, in an exclusive interview with the Guardian. Speaking in his Brussels parliamentary office, Tudorache said: “I’m more optimistic than I am pessimistic about AI. I would be a pessimist if we did nothing about it.” Tudorache said there was a chance he could get a final text agreed for the AI Act by Wednesday. It would then be formally adopted by parliament and, bar any hiccups, become law early next year. “We are in touching distance,” he said. “A good 60-70% of the text is already agreed.” Dragoș Tudorache, an MEP and co-rapporteur of the AI committee in the European parliament: ‘It means AI companies can’t wash away their responsibility.’ One of the remaining areas of contention is the use of AI-powered live facial recognition. Member states want to retain this right, arguing it is vital for security on borders but also to avert public disorder. But MEPs felt real-time facial recognition cameras on streets and in public spaces were an invasion of privacy, and voted to remove those clauses. They also voted to remove the right of authorities or employers to use AI-powered emotion recognition technology already used in China, whereby facial expressions of anger, sadness, happiness and boredom, as well as other biometric data, are monitored to spot tired drivers or workers. Tudorache hinted at a compromise in the making on all the remaining contentious areas. “There is a plausible scenario that we keep talking until the middle of the night and close the file on 25 October,” he said. 
The handful of subjects that have not yet been agreed were all “intrinsically linked”, so there would be no opportunity for simple trading of text between the political interests, he said. Everyone would have to concede something in order for the surveillance clauses to get over the line in a package. Apart from real-time surveillance concerns, one of the hottest topics preoccupying regulators is the unknown threats that AI could pose, threats that developers don’t even know about, such as the ability to create pathogens and other biohazards. “You could grow your own little monster in your kitchen,” said Tudorache of AI’s capacity to give members of the public the tools to create biohazards. The person who builds the bomb can be picked up by police under existing laws in all countries. But under the AI Act, the developer or owner of the AI tools will also be accountable and could be fined up to 6% of their revenue or banned from the EU entirely. “It means AI companies can’t wash away their responsibility. They won’t be able to say: ‘Well, it is the user who is responsible for taking my model and doing something bad with it.’ If their AI model is capable of producing something that is illegal, then they will have legal responsibility for it,” said Tudorache. He said that this would be a strong deterrent against the potential downsides of AI. The rules would “also help these companies”, he added. “AI companies themselves say that they see their models as creating very serious risks, some of them to mankind. And I use their own words, not mine,” he said. 
“There are now AI safety summits left, right and centre, one in London in the next two, three weeks,” he said, referring to an international AI safety summit to be convened by the UK’s prime minister, Rishi Sunak, next week, where organisers plan to delve into the challenge of regulating unknown threats. Increased accountability and transparency that will be required under the AI Act “is not only an obligation that puts a burden on them, I also see it as a good opportunity for them to build confidence in their models” and in the public, Tudorache added. Other elements of the act unlikely to be affected by the final round of negotiations include protection of the creative sector. AI companies will have to submit lists of data sources to the European Commission as part of a regular reporting requirement, which Tudorache hoped would act as a deterrent to the use of data and creative content without recompense. The idea is to enable musicians, scientific researchers or authors to easily see if their work has been plagiarised and give them legal protections. The AI Act will also include obligations for tech companies to regularly publish data on the amount of electricity they consume, amid reports it took thousands of computers six months to train ChatGPT. “A training run eats a lot of energy and there is very little public data available to see what the overall toll is,” said Tudorache. Ireland, one of the countries that does track energy usage, reported that electricity consumption by datacentres increased from 5% of the national total in 2015 to 18% in 2022. “I want transparency on energy. Energy is an open market so the AI Act won’t stop energy use, but if there is an onus on companies publishing data on energy use, that way you can build awareness and shape public policy,” he said. 
"
14,553
2,018
"Turing Award 2018: Nobel Prize of computing given to ‘godfathers of AI’ - The Verge"
"https://www.theverge.com/2019/3/27/18280665/ai-godfathers-turing-award-2018-yoshua-bengio-geoffrey-hinton-yann-lecun"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech / Artificial Intelligence / Science ‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing ‘Godfathers of AI’ honored with Turing Award, the Nobel Prize of computing / Yoshua Bengio, Geoffrey Hinton, and Yann LeCun laid the foundations for modern AI By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge. | Share this story The 2018 Turing Award, known as the “Nobel Prize of computing,” has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence. Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses. deep learning powers many contemporary applications of AI In fact, you probably interacted with the descendants of Bengio, Hinton, and LeCun’s algorithms today — whether that was the facial recognition system that unlocked your phone, or the AI language model that suggested what to write in your last email. All three have since taken up prominent places in the AI research ecosystem, straddling academia and industry. Hinton splits his time between Google and the University of Toronto; Bengio is a professor at the University of Montreal and started an AI company called Element AI; while LeCun is Facebook’s chief AI scientist and a professor at NYU. “It’s a great honor,” LeCun told The Verge. “As good as it gets in computer science. 
It’s an even better feeling that it’s shared with my friends Yoshua and Geoff.” Jeff Dean, Google’s head of AI, praised the trio’s achievements. “Deep neural networks are responsible for some of the greatest advances in modern computer science,” said Dean in a statement. “At the heart of this progress are fundamental techniques developed by this year’s Turing Award winners, Yoshua Bengio, Geoff Hinton, and Yann LeCun.” The trio’s achievements are particularly notable as they kept the faith in artificial intelligence at a time when the technology’s prospects were dismal. AI is well-known for its cycles of boom and bust, and the issue of hype is as old as the field itself. When research fails to meet inflated expectations it creates a freeze in funding and interest known as an “AI winter.” It was at the tail end of one such winter in the late 1980s that Bengio, Hinton, and LeCun began exchanging ideas and working on related problems. These included neural networks — computer programs made from connected digital neurons that have become a key building block for modern AI. “There was a dark period between the mid-90s and early-to-mid-2000s when it was impossible to publish research on neural nets, because the community had lost interest in it,” says LeCun. “In fact, it had a bad rep. It was a bit taboo.” The trio decided they needed to rekindle interest, and secured funding from the Canadian government to sponsor a loose hub of interrelated research. “We organized regular meetings, regular workshops, and summer schools for our students,” says LeCun. “That created a small community that [...] around 2012, 2013 really exploded.” During this period, the three showed that neural nets could achieve strong results on tasks like character recognition. But the rest of the research world did not pay attention until 2012, when a team led by Hinton took on a well-known AI benchmark called ImageNet. 
Researchers had so far only delivered incremental improvements on this object recognition challenge, but Hinton and his students smashed the next-best algorithm by more than 40 percent with the help of neural networks. “The difference there was so great that a lot of people, you could see a big switch in their head going ‘clunk,’” says LeCun. “Now they were convinced.” Cheap processing power from GPUs (originally designed for gaming) and an abundance of digital data (given off by the internet the same way a car gives off fumes) offered fuel for these little cognitive engines. And since 2012, the basic techniques that Bengio, Hinton, and LeCun pioneered, including backpropagation and convolutional neural networks, have become ubiquitous in AI, and, by extension, in technology as a whole. LeCun says he is optimistic about the prospects of artificial intelligence, but he’s also clear that much more work needs to be done before the field lives up to its promise. Current AI systems need lots of data to understand the world, can be easily tricked, and are only good at specific tasks. “We just don’t have machines with common sense,” says LeCun. If the field is to continue on its upward trajectory, new methods will need to be discovered that are as foundational as those developed by the godfathers of AI. “Whether we’ll be able to use new methods to create human-level intelligence, well, there’s probably another 50 mountains to climb, including ones we can’t even see yet,” says LeCun. “We’ve only climbed the first mountain. Maybe the second.” 
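The core technique named above can be shown in miniature. The sketch below trains a tiny fully connected network on XOR with hand-written backpropagation (NumPy only); it is a toy demonstration of the method, not anything resembling the laureates' actual systems.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation on XOR --
# a toy illustration of the technique the Turing laureates developed.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(2000):
    # Forward pass through the two layers
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # Backward pass: chain rule from the output back to each weight
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient descent step
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The network can fit XOR only because the hidden layer plus backpropagated gradients let it learn a non-linear decision boundary, which is exactly what single-layer models of the pre-winter era could not do.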
"
14,554
2,022
"Report: Synthetic fraud losses expected to double to nearly $5B by 2024 | VentureBeat"
"https://venturebeat.com/security/report-synthetic-fraud-losses-expected-to-double-to-nearly-5b-by-2024"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Synthetic fraud losses expected to double to nearly $5B by 2024 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Synthetic identity fraud schemes have taken the United States by storm over the past 20 years, and are expected to generate nearly $5 billion in financial losses by 2024. Synthetic fraud is no longer a surprise attack on America’s financial and commerce systems — while the earliest fraudulent identities were completely fake, and many of the patterns used to establish synthetic identities early on are still being used today. Fortunately, the patterns of synthetic fraud are well understood and the profiles of manipulated and fabricated subtypes have been teased out, which will aid in stopping synthetic fraud in its tracks. 
Socure’s recent report examined years of tagged fabricated synthetic fraud data to identify patterns by name, demographics, habits, location and other behaviors to help guide organizations’ approaches to identifying and combating synthetic fraud. In analyzing the patterns for first names and surnames, Socure noticed very common choices, which led the company to believe that bad actors are strategically creating synthetic identities to blend in with the population. The goal to “blend in” doesn’t stop at names. It’s a trait shared by the remainder of patterns tied to these fabricated synthetic identities as well, including age, location, residence and more. When combining the most popular first name for synthetic identities with the most popular surname, Socure established the most common full name for synthetic identities: Michael Smith. Synthetic fraud will continue to plague the industry for the next several years. The only way to completely solve for synthetic fraud is to work together as an industry to stop the damage that bad actors are committing against consumers and our financial system. Read the full report from Socure. "
14,555
2,023
"Amazon launches Bedrock for generative AI, escalating AI cloud wars | VentureBeat"
"https://venturebeat.com/ai/amazon-launches-bedrock-for-generative-ai-escalating-ai-cloud-wars"
"Amazon launches Bedrock for generative AI, escalating AI cloud wars Yesterday Amazon launched Bedrock for generative AI, a landscape-shaking move that escalated the cloud AI wars that have been heating up over the past year. Bedrock, a new AWS cloud service, allows developers to build and scale generative AI chatbots and other applications in the cloud, using internal organizational data to fine-tune a variety of leading pretrained large language models (LLMs) from Anthropic, AI21 and Stability AI, as well as two new LLMs in Amazon’s Titan model family. Amazon CEO Andy Jassy spoke directly about the AWS focus on enterprise AI with Bedrock when speaking to CNBC’s Squawk Box yesterday. “Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, and most companies don’t want to go through that,” he said. 
“So what they want to do is they want to work off of a foundational model that’s big and great already and then have the ability to customize it for their own purposes. And that’s what Bedrock is.” According to Gartner analyst Sid Nag, with the buzz and excitement around generative AI news from Google and Microsoft, Amazon was overdue to follow suit. “Amazon had to do something,” he told VentureBeat in an interview. “The cloud providers are obviously best suited to handle data-heavy generative AI, because they are the ones that have these hyperscale cloud computing storage offerings.” Bedrock, he explained, provides a meta layer of usability for foundation models on AWS. Amazon is also notably calling out its ability to provide a secure environment for organizations to use this type of AI, he added. “Organizations want to create their own walled garden in a generative AI model, so I think you’ll see more and more of that,” he said. In addition, Amazon’s CodeWhisperer announcement, an AI-driven coding companion that uses an LLM under the hood and supports Python, Java, JavaScript and other languages, is also a key effort to make sure AWS competes in cloud AI, Nag said. Bedrock’s multiple models make Amazon’s AWS attractive Emad Mostaque, CEO of Stability AI, pointed out that Bedrock’s offering of multiple models, including Stable Diffusion, plays to Amazon’s history of focusing on choice. “In his original plan to $100 billion of revenue, Jeff Bezos envisioned that half that revenue would be Amazon products and half third party through their marketplace,” he told VentureBeat in a message. While it may have been surprising that Cohere was not on the list of Bedrock models — it is available on SageMaker and AWS — Cohere CEO Aidan Gomez said the company decided not to participate in the Bedrock product at this time. 
“We may change our opinion and join the ‘model zoo’ in the future, but we decided not to be a part of this initial release,” he told VentureBeat by email. But Yoav Shoham, cofounder and co-CEO of AI21 Labs, focused on the fact that AWS has curated a set of best-in-class models. “There is a class of text-based applications particularly well served by Jurassic-2’s multilingual, multisized models,” he told VentureBeat by email. “We look forward to enabling, jointly with AWS, the creation of many such applications.” Low-code platform Pega was noted in AWS VP Swami Sivasubramanian’s blog post yesterday as one of Bedrock’s early adopters. Peter van der Putten, director of the AI lab at Pega, said the company intends to use Bedrock for a range of use cases in its platform, which it will make available to customers. “For example, just based on a simple sentence such as ‘create a dental insurance claim application,’ we can generate a runnable prototype low-code app including workflow, data models and other artifacts, which will jumpstart, democratize and accelerate development of low-code business applications,” he said. “There are also other areas in our low-code platform where we leverage it, such as allowing users to ask for reports just using natural language.” The desire for multicloud will keep the cloud AI competition going What makes Amazon very attractive for Pega and its customers, he added, is Bedrock’s access to a wide range of models, commercial as well as open source, in “a safe, enterprise-scale manner.” But he also called out the importance of multicloud options: “In addition to this, our clients will also be able to access OpenAI models through Azure, and we are in discussion with other major cloud players as well, plus keeping a close eye on open source, for the most sensitive applications.” That, says Gartner’s Nag, is the irony of the cloud AI wars. 
“The fundamental premise of building a generative AI model is democratization of data — the more information you have, the higher the fidelity of the response,” he said. “But the whole philosophy and approach that cloud providers have historically taken is ‘I should own everything, everything should run in my estate.’ So on the one hand, they want to be very predatory, but on the other hand, are they willing to share data across multiple estates?” "
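For developers, the Bedrock workflow described above boils down to sending a JSON payload to a hosted model endpoint through the AWS SDK. A minimal sketch, assuming boto3's "bedrock-runtime" client and the documented request schema for Amazon's Titan text model; the exact model id and field names are assumptions to verify against current AWS docs:

```python
# Hedged sketch: invoking a hosted foundation model on Bedrock via boto3.
# The Titan request/response field names below follow AWS's documented
# pattern but should be treated as assumptions, not a definitive reference.
import json

def build_titan_body(prompt: str, max_tokens: int = 256) -> str:
    """JSON request body for an Amazon Titan text-generation model."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": 0.2,  # low temperature for more deterministic output
        },
    })

def invoke_titan(prompt: str, region: str = "us-east-1") -> str:
    import boto3  # requires AWS credentials with Bedrock model access
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # one Titan-family model (assumption)
        body=build_titan_body(prompt),
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]
```

Fine-tuning with internal organizational data, as the article describes, happens as a separate Bedrock job; the invocation shape stays the same once a customized model id is available.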
14,556
2,023
"Remote IT management gets a generative AI boost as Atera adds OpenAI Codex | VentureBeat"
"https://venturebeat.com/ai/remote-it-management-gets-a-generative-ai-boost-as-atera-adds-openai-codex"
"Remote IT management gets a generative AI boost as Atera adds OpenAI Codex Today Atera announced that it is integrating OpenAI Codex with its RMM platform, helping users automatically generate the scripts that execute IT processes. OpenAI Codex is a large language model (LLM) designed to help users with application development. It is a foundational technology that enables the GitHub Copilot service for pair programming. Remote IT monitoring requires executing a variety of tasks In the world of remote monitoring and management (RMM) for IT teams, a lot of tasks need to be executed. Those tasks include system and application management, patch management, and resource and storage configuration, among others. Today RMM is largely handled by software systems that are increasingly cloud-based. 
Among the vendors in the space is Israeli firm Atera, which raised $77 million in a Series B round of funding back in 2021 to help advance its efforts. Artificial intelligence (AI) and machine learning (ML) have long been part of the company’s technology, with algorithms designed to help automate processes and predict potential failures to be remediated. Connecting different systems and initially setting up some processes, however, has often required organizations to do some coding to get things to work as desired. That coding effort is about to get a whole lot easier for Atera’s users thanks to AI innovation being integrated from OpenAI. “What we’ve done with OpenAI is we are releasing a system where instead of writing a script, you just write what you want to do,” Gil Pekelman, CEO and founder of Atera, told VentureBeat. “OpenAI gives you the script and that script with a press of a button is automated.” How Atera has been using AI to automate RMM Pekelman explained that since his company’s inception a decade ago, the basic concept of the Atera platform was to enable users to define IT operations and processes so those operations and processes could be automated. The processes include daily operations as well as preventative maintenance. The Atera system itself is an IT management platform that combines the technical parts of remote monitoring, patch management and operation automation with the operational side, which includes help desk and ticketing. On the monitoring side, Pekelman said that the Atera system collects 60,000 data points per second about the state of an IT environment and its applications. All that data is then used by Atera’s AI algorithms, which have been designed to forecast when problems are likely to occur, so they can be automatically remediated before they have an impact. 
“Our 11,000 customers run 200 million IT actions every month,” Pekelman said. “Those are actions to install or fix something, and 99.9% of those actions are automated.” How Atera is using OpenAI Codex to accelerate IT operations While many common IT processes are already defined in the Atera platform, Pekelman noted that many organizations also have unique and specialized requirements. To meet those specialized requirements, organizations had to write their own scripts that would then run on the Atera system. Writing those scripts often required time and effort, as well as having the right IT skills in place to understand how things work. With the integration of the OpenAI Codex, the scripting complexity has now been abstracted. Using natural human language, a user just needs to explain what the desired task is and the OpenAI Codex will generate the required script. The OpenAI Codex itself is a large language model trained on a large corpus of development languages and application logic. Pekelman explained that Atera spent months fine-tuning OpenAI Codex, training the system on Atera’s use case so that it could help solve its users’ challenges. “We’ve been working on this for quite a while and it’s pretty accurate,” Pekelman said. He noted that the generated code might sometimes only provide the user with 90% of what they need, while other times it might be 100%. In any case, he emphasized that the OpenAI Codex code will save users countless hours that otherwise would have been spent writing code manually. “What it does is it saves hours and hours for an IT person and it also increases the power of the automation of the system that further saves them more hours of work,” he said. 
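The flow Pekelman describes, turning a natural-language request into a runnable script, can be sketched roughly as follows. The prompt wording, target scripting language, model name, and helper functions here are illustrative assumptions, not Atera's implementation:

```python
# Sketch of a natural-language-to-script pipeline in the spirit of Atera's
# Codex integration. Everything here (prompt template, fence handling) is an
# illustrative assumption, not Atera's actual code.
FENCE = "`" * 3  # a literal triple-backtick, built up so it does not end this block

def build_prompt(task: str) -> str:
    """Wrap the user's plain-language request in a prompt for a code model."""
    return (
        "# Write a PowerShell script for the following IT task.\n"
        f"# Task: {task}\n"
        "# Script:\n"
    )

def extract_script(completion: str) -> str:
    """Strip the markdown fences a code model sometimes wraps around output."""
    lines = completion.strip().splitlines()
    if lines and lines[0].startswith(FENCE):
        lines = lines[1:]
    if lines and lines[-1].startswith(FENCE):
        lines = lines[:-1]
    return "\n".join(lines).strip()

# The prompt would be sent to a Codex-style completion endpoint, e.g.
# openai.Completion.create(model="code-davinci-002", prompt=build_prompt(task)),
# and the returned text cleaned up before the platform runs it.
raw = FENCE + "powershell\nRestart-Service -Name Spooler\n" + FENCE
print(extract_script(raw))  # → Restart-Service -Name Spooler
```

The post-processing step matters in practice: as Pekelman notes, generated scripts may only be 90% right, so a real system would validate or stage the output before the "press of a button" automation runs it.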
"
14,557
2,023
"As critics circle, Sam Altman hits the road to hype OpenAI | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/as-critics-circle-sam-altman-hits-the-road-to-hype-openai-the-ai-beat"
"As critics circle, Sam Altman hits the road to hype OpenAI | The AI Beat OpenAI CEO Sam Altman soft-launched a global spring tour with an in-person meeting with Japan’s prime minister yesterday, during which he announced possible plans to open an OpenAI office and expand services in the country. Altman plans a 17-city trek to promote OpenAI — including stops in Toronto, Washington, D.C., Rio de Janeiro, Lagos, Madrid, Brussels, Munich, London, Paris, Tel Aviv, Dubai, New Delhi, Singapore, Jakarta, Seoul, Tokyo and Melbourne. The tour comes at a time when OpenAI is being called out on several other fronts. It has been less than two weeks since a contentious open letter calling for an AI ‘pause’ was published, signed by Elon Musk, Steve Wozniak and several thousand others. 
There was Italy’s announcement that it would ban OpenAI’s ChatGPT due to data privacy concerns; a complaint that GPT-4 violates FTC rules; and a ChatGPT bug that exposed security vulnerabilities. And just today, the Biden Administration announced it would examine whether checks need to be placed on AI tools such as ChatGPT, while China released rules for generative AI as Chinese companies Alibaba and Baidu launched their own ChatGPT-like tools. In last week’s AI Beat, I homed in on the fact that today’s AI discourse has veered towards the political, with all the varying agendas and power-seeking behaviors that go along with that. To that end, as OpenAI comes under greater and greater scrutiny, a ’round-the-world goodwill tour’ — as the Washington Post put it on Sunday — is just the ticket. After all, as regulators start circling, competitors creep closer, and critics get louder, perhaps some political glad-handing is in order. A moment to reflect on OpenAI releases and highlight other news Personally, I was happy that there was a pause on actual tech releases from OpenAI last week. March was completely overwhelming, with barely a moment to contemplate the societal impacts of GPT-4, which was released on March 15, and ChatGPT plugins, which were announced on March 23. It gave me the chance to highlight how enterprise companies are actually implementing these tools. For example, I spoke to Desirée Gosby, VP of emerging technology at Walmart Global Tech, about how Walmart is advancing its conversational AI capabilities using GPT-4. I also talked to Ya Xu, VP of engineering and head of data and AI at LinkedIn, about how the sprint to develop LinkedIn’s recently released generative AI tools took only three months. 
And yesterday, I took a deep dive into open source AI, which has been having a moment over the past few weeks following a wave of recent large language model (LLM) releases and an effort by startups, collectives and academics to push back on the shift in AI to closed, proprietary LLMs like OpenAI’s GPT-4. Global reaction to OpenAI tour remains to be seen But of course, I’m expecting plenty of fresh OpenAI news coming down the pike. For example, global reaction to Sam Altman’s OpenAI tour remains to be seen. According to Reuters, Japan’s chief cabinet secretary Hirokazu Matsuno said that Japan will consider government adoption of AI technology such as OpenAI’s ChatGPT chatbot if privacy and cybersecurity concerns are resolved. When a reporter asked Matsuno about Italy’s temporary ban on ChatGPT, he said Japan is aware of other countries’ actions and would continue evaluating possibilities of introducing AI to reduce government workers’ workload. The remarks came shortly before Altman met Japanese Prime Minister Fumio Kishida and said that OpenAI is “looking at opening an office.” "
14,558
2,023
"Mistral AI secures €105M in Europe’s largest-ever seed round"
"https://thenextweb.com/news/mistral-ai-secures-105m-europes-largest-ever-seed-round"
"Mistral AI secures €105M in Europe’s largest-ever seed round This article was published on June 14, 2023 Leading VC says new generation of global players will emerge from European ecosystem The artificial intelligence hype shows no sign of fading just yet, and investors are practically falling over themselves to fund the next big thing in AI. Yesterday, Paris-based startup Mistral AI announced it had secured €105mn in what is reportedly Europe’s largest-ever seed round. Mistral AI was founded only four weeks ago by a trio of AI researchers. Arthur Mensch, the company’s CEO, was formerly employed by Google’s DeepMind. His co-founders, Timothée Lacroix (CTO) and Guillaume Lample (Chief Science Officer), previously worked for Meta. The company has yet to develop its first product. However, on a mission to “make AI useful,” it plans to launch a large language model (LLM) similar to the system behind OpenAI’s ChatGPT in early 2024. A large part of the funds raised will be used to rent the computing power to train it. The idea is to use only publicly available data, to avoid the legal issues and copyright backlash faced by others in the industry. 
While Mistral hopes to take on OpenAI with actual open-sourced models and data sets, it is setting itself apart from the Microsoft-backed step-change initiator by targeting enterprises instead of consumers. The company says its goal is to help business clients improve processes around R&D, customer care, and marketing, as well as giving them the tools to build new products with AI. Vision coupled with hands-on experience The funding round is led by Lightspeed Venture Partners. The VC’s partner, Antoine Moyroud, says that Lightspeed has had the opportunity to meet with several talented researchers-turned-founders in AI, but few had a vision beyond the technical field. Mensch, Lacroix and Lample, on the other hand, according to Moyroud, are “part of a group of select few globally who have both the technical understanding required to build out their own vision along with the hands-on experience of training and operating large language models at scale.” Generally, the American VC says it believes that Europe has “a decisive role” to play in the AI field. Just in September last year, the firm opened up new offices in London, Berlin, and Paris, and says it is looking to partner with more European founders of the same ambition behind Mistral. “Our investment in Mistral, and all our portfolio companies in Europe, are evidence of our firm conviction that a new generation of global players will emerge from this ecosystem,” Lightspeed said in a statement. JCDecaux Holding, Motier Ventures, La Famiglia, Headline, Exor Ventures, Sofina, First Minute Capital, and LocalGlobe also participated in the round, along with private investors including French billionaires Rodolphe Saadé and Xavier Niel, as well as former Google CEO Eric Schmidt and French investment bank BpiFrance. Story by Linnea Ahlgren Linnea is the senior editor at TNW, having joined in April 2023. 
She has a background in international relations and covers clean and climate tech, AI and the politics of technology. But first, coffee. Copyright © 2006—2023, The Next Web B.V. Made with <3 in Amsterdam. "
14,559
2,023
"Canadian government seeks input on voluntary code of practice for generative AI | VentureBeat"
"https://venturebeat.com/ai/canadian-government-seeks-input-on-voluntary-code-of-practice-for-generative-ai"
"Canadian government seeks input on voluntary code of practice for generative AI The Canadian government plans to consult with the public about the creation of a “voluntary code of practice” for generative AI companies. According to The National Post, a note detailing the consultations was accidentally posted on the government of Canada’s “Consulting with Canadians” website. The posting, spotted by University of Ottawa professor Michael Geist and shared on social media, revealed that engagement with stakeholders started on August 4 and would end on September 14. The voluntary code of practice for gen AI systems will be developed through Innovation, Science and Economic Development Canada (ISED), and aims to ensure that participating firms adopt safety measures, testing protocols and disclosure practices. 
“ISED officials have begun conducting a brief consultation on a generative AI voluntary code of practice intended for Canadian AI companies with dozens of AI experts, including from academia, industry and civil society, but we don’t have an open link to share for further public consultation,” ISED spokesperson Audrey Champoux said in an email to VentureBeat. More information would be released soon, she said. Initial step before binding regulations Originally reported by The Logic, internal documents outlined how the voluntary code of practice would have companies build trust in their systems and transition smoothly to comply with forthcoming regulatory frameworks. This initiative would serve as an initial step before binding regulations are implemented. The code of practice is being developed in consultation with AI companies, academics and civil society to ensure its effectiveness and comprehensiveness. Conservative Party of Canada member of Parliament Michelle Rempel — who leads a multi-party caucus focusing on advanced technologies — expressed surprise at the consultation’s appearance. Rempel emphasized the importance of government engagement with Parliament on a non-partisan basis to avoid polarization on the issue. “Maybe if it was an actual mistake the department will reach out to us … it’s certainly no secret that we exist,” Rempel told The National Post. In a follow-up series of tweets, the Minister of Innovation, Science and Industry François-Philippe Champagne reiterated the need for “new guidelines on advanced generative AI systems.” “These consultations will inform a crucial part of Canada’s next steps on artificial intelligence and that’s why we must take the time to hear from industry experts and leaders,” said Champagne. 
While I thank the National Post for correcting its article, I still want to make some things clear: ➡️ Canada is a world leader in trusted and responsible AI. It is essential that we create new guidelines on advanced generative AI systems. Guardrails to protect individuals who use AI By committing to these guardrails, companies are encouraged to ensure that their AI systems do not engage in activities that could potentially harm users, such as impersonation or providing improper advice. They are also encouraged to train their AI systems on representative datasets to minimize biased outputs and to employ techniques like “red teaming” to identify and rectify flaws in their systems. The code also emphasizes the importance of clear labeling of AI-generated content to avoid confusion with human-created material and to enable users to make informed decisions. Additionally, companies are encouraged to disclose key information about the inner workings of their AI systems to foster trust and understanding among users. Early support grows, but concerns remain Big tech companies like Google, Microsoft and Amazon responded favorably to the government’s plans, telling The Logic that they would be participating in the consultation process. Amazon supports “effective risk and use case-based guardrails” which give companies “legal certainty,” its spokesperson Sandra Benjamin told The Logic. Not everyone was satisfied, though. University of Ottawa digital policy expert Geist responded to Champagne’s tweet, calling for more engagement with the “broader public.” Incredible that @FP_Champagne can post a tweet stream on the private generative AI consultation and *still* not include any reference to the importance of hearing from the broader public. https://t.co/GjV2FIMVET The Canadian government’s efforts in the field of gen AI are not limited to voluntary guardrails. 
The government has also proposed legislation, including the Artificial Intelligence and Data Act (AIDA), which sets requirements for “high-impact systems.” However, the specific criteria and regulations for these systems will be defined by ISED, and they are expected to come into effect at least two years after the bill becomes law. By developing this code of practice, Canada is taking an active role in shaping the development of responsible AI practices globally. The code aligns with similar initiatives in the United States and the European Union and demonstrates the Canadian government’s commitment to ensuring that AI technology evolves in a way that benefits society as a whole. "
14,560
2,023
"White House gets AI firms to agree to voluntary safeguards, but not new regulations | VentureBeat"
"https://venturebeat.com/ai/white-house-got-ai-firms-to-agree-to-voluntary-safeguards-but-not-new-regulations"
"White House gets AI firms to agree to voluntary safeguards, but not new regulations

Today, the Biden-Harris Administration announced that it has secured voluntary commitments from seven leading AI companies to manage the short- and long-term risks of AI models. Representatives from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft are set to sign the commitments at the White House this afternoon. The commitments secured include ensuring products are safe before introducing them to the public — with internal and external security testing of AI systems before their release as well as information-sharing on managing AI risks. In addition, the companies commit to investing in cybersecurity and safeguards to “protect proprietary and unreleased model weights,” and to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
Finally, the commitments also include developing systems such as watermarking to ensure users know when content is AI-generated; publicly reporting AI system capabilities, limitations and appropriate/inappropriate use; and prioritizing research on societal AI risks including bias and protecting privacy. Notably, the companies also commit to “develop and deploy advanced AI systems to help address society’s greatest challenges,” from cancer prevention to mitigating climate change. Mustafa Suleyman, CEO and cofounder of Inflection AI, which recently raised an eye-popping $1.3 billion in funding, said on Twitter that the announcement is a “small but positive first step,” adding that making truly safe and trustworthy AI “is still only in its earliest phase … we see this announcement as simply a springboard and catalyst for doing more.” Meanwhile, OpenAI published a blog post in response to the voluntary safeguards. In a tweet, the company called them “an important step in advancing meaningful and effective AI governance around the world.”

AI commitments are not enforceable

These voluntary commitments, of course, are not enforceable and do not constitute any new regulation. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, called the voluntary industry commitments “an important first step,” highlighting the commitment to thorough testing before releasing new AI models, “rather than assuming that it’s acceptable to wait for safety issues to arise ‘in the wild,’ meaning once the models are available to the public.”
Still, since the commitments are unenforceable, he added that “it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections and stepped-up research on the wide range of risks posed by generative AI.” For its part, the White House did call today’s announcement “part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, and to protect Americans from harm and discrimination.” It said the Administration is “currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.”

Voluntary commitments precede Senate policy efforts this fall

The industry commitments announced today come in advance of significant Senate efforts coming this fall to tackle complex issues on AI policy and move towards consensus around legislation. According to Senate Majority Leader Chuck Schumer (D-NY), U.S. senators will be going back to school — with a crash course in AI that will include at least nine forums with top experts on copyright, workforce issues, national security, high-risk AI models, existential risks, privacy, and transparency and explainability, as well as elections and democracy. The series of AI “Insight Forums,” he said this week, which will take place in September and October, will help “lay down the foundation for AI policy.” Schumer announced the forums, led by a bipartisan group of four senators, last month, along with his SAFE Innovation Framework for AI Policy.

Former White House advisor says voluntary efforts ‘have a place’

Suresh Venkatasubramanian, a White House AI policy advisor to the Biden Administration from 2021-2022 (where he helped develop The Blueprint for an AI Bill of Rights) and professor of computer science at Brown University, said on Twitter that these kinds of voluntary efforts have a place amidst legislation, executive orders and regulations.
“It helps show that adding guardrails in the development of public-facing systems isn’t the end of the world or even the end of innovation. Even voluntary efforts help organizations understand how they need to organize structurally to incorporate AI governance.” He added that a possible upcoming executive order is “intriguing,” calling it “the most concrete unilateral power the [White House has].”"
14,561
2,022
"Black Hat 2022: Why machine identities are the most vulnerable | VentureBeat"
"https://venturebeat.com/security/black-hat-2022-reveals-why-machine-identities-are-the-most-vulnerable"
"Black Hat 2022: Why machine identities are the most vulnerable

Enterprises are struggling to secure machine identities because hybrid cloud configurations are too complex to manage, leading to security gaps cyberattackers exploit. Adding to the confusion are differences between public cloud providers’ approaches to defining machine-based identities using their native identity access management (IAM) applications. Additionally, due to differences in how IAM and machine identity management are handled across cloud platforms, it can be challenging to enforce zero-trust principles, enabling least-privileged access in a hybrid cloud environment. Managing certificate lifecycles on hybrid cloud deployment models for machine identities is a technical challenge that many enterprise IT teams don’t have the resources to take on.
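The certificate-lifecycle problem described above reduces, at its core, to a date comparison per identity. A minimal sketch: given an inventory mapping machine identities to certificate expiry dates, flag the identities due for renewal. The inventory, identity names and renewal window below are hypothetical, not taken from any vendor's product:

```python
from datetime import datetime, timedelta

def expiring_certs(inventory, now, window_days=30):
    """Return machine identities whose certificates expire within the window.

    `inventory` maps an identity name to its certificate's expiry datetime.
    """
    cutoff = now + timedelta(days=window_days)
    return sorted(name for name, not_after in inventory.items() if not_after <= cutoff)

# Hypothetical inventory of workload identities (containers, VMs, service accounts).
inventory = {
    "payments-vm": datetime(2022, 9, 1),
    "etl-container": datetime(2023, 3, 15),
    "ci-runner": datetime(2022, 8, 20),
}
print(expiring_certs(inventory, now=datetime(2022, 8, 10)))  # ['ci-runner', 'payments-vm']
```

In practice the inventory would be populated by certificate discovery scans rather than written by hand; the hard part enterprises face is keeping that inventory complete, not the check itself.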
According to Osterman Research, 61% of organizations cannot track certificates and keys across their digital assets. Given how quickly workload-based machine identities can be created, including containers, transaction workflows and virtual machines (VMs), it is understandable that only about 40% of machine identities are being tracked. IAM is becoming more challenging every day, as the average employee has over 30 digital identities, with a typical enterprise having over 45 times more machine identities than human ones.

Machine identities are high risk in hybrid clouds

Two sessions at the Black Hat 2022 cybersecurity conference explained why machine identities are a high-risk attack surface, made more vulnerable in hybrid cloud configurations. The first session, titled IAM The One Who Knocks, was presented by Igal Gofman, head of research at Ermetic, and Noam Dahan, research lead at Ermetic. The second, titled I AM whomever I Say I Am: Infiltrating Identity Providers Using a 0Click Exploit, was presented by Steven Seeley, a security researcher at the 360 Vulnerability Research Institute. Both presentations provided recommendations on what enterprises can do to reduce the risk of a breach. In IAM The One Who Knocks, Gofman and Dahan illustrated how different the dominant cloud platforms’ approaches to IAM are. Protecting machine identities with native IAM support from each public cloud platform just isn’t working, as gaps in hybrid cloud configurations leave machines vulnerable. Their presentation provided insights into what makes Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform’s (GCP) approaches to IAM different. “IAM systems in all three cloud providers we discussed are complex,” Dahan said during the session. “We find that organizations will make mistakes.
One of the most important things you can do is stick to one AWS account or GCP project per workload.” AWS, Microsoft Azure and GCP provide enough functionality to help an organization get up and running, yet lack the scale to fully address the more challenging, complex areas of IAM in hybrid cloud configurations. Cloud providers claim their machine identities are secure, yet in hybrid cloud configurations, that breaks down fast. Gofman and Dahan pointed out that enterprises are responsible for breached machine identities because every platform provider defines its scope of services using the shared responsibility model.

Steps to secure machine identities

Black Hat’s sessions on IAM detailed insights and recommendations on how to better protect machine identities, including the following:

Understand that AWS, Microsoft Azure and Google Cloud Platform’s IAM systems do not protect privileged access credentials, machine identities, endpoints or threat surfaces in a hybrid cloud configuration. As the shared responsibility model illustrates, AWS, Azure and GCP secure only the core areas of their respective platforms, including infrastructure and hosting services. CISOs and CIOs rely on the shared responsibility model to create enterprise-wide security strategies that will make least-privileged access achievable across hybrid cloud configurations. The eventual goal is to enable a zero-trust security framework enterprise-wide.

Hybrid cloud architectures that include AWS, Microsoft Azure and Google Cloud Platform do not need an entirely new identity infrastructure. Creating new and often duplicate machine identities increases cost, risk, overhead and the burden of requiring additional licenses. On the other hand, enterprises with a standardized identity infrastructure need to stay with it. Besides having the taxonomy engrained across their organization, changing it will most likely create errors, leave identities vulnerable and be expensive to fix.
Enterprises need to consider IAM platforms that can scale across hybrid cloud configurations to reduce the risk of a breach. The latest generation of IAM systems provides tools for managing machine lifecycles synchronized to certificate management. IAM architectures also support customized scripts for protecting workflow-based identities, including containers, VMs, IoT, mobile devices and more. Leading vendors working to secure IAM for machine identities include Akeyless, Amazon Web Services (AWS), AppViewX, CrowdStrike, Ivanti, HashiCorp, Keyfactor, Microsoft, Venafi and more."
14,562
2,022
"Why getting microsegmentation right is key to zero trust | VentureBeat"
"https://venturebeat.com/security/why-getting-microsegmentation-right-is-key-to-zero-trust"
"Why getting microsegmentation right is key to zero trust

[Image caption: In just four months, Microsoft has integrated CloudKnox into its Zero Trust architecture. It's an example of what can be accomplished when DevOps teams have a clear security framework to work with, complete with Zero Trust based design objectives.]

It is not just the breach — it is the lateral movement that distributes malicious code to destroy IT infrastructures, making zero trust a priority. Many CISOs and business leaders have been in firefights recently as they try to increase the resilience of their tech stacks and infrastructures while containing breaches, malware and access credential abuse. Unfortunately, rapidly expanding attack surfaces, unprotected endpoints and fragmented security systems make resilience an elusive goal.
The mindset that breach attempts are inevitable drives greater zero-trust planning, including microsegmentation. At its core, zero trust is defined by assuming all entities are untrusted by default, enforcing least-privileged access on every resource and identity, and implementing comprehensive security monitoring.

Microsegmentation is core to zero trust

The goal of network microsegmentation is to segregate and isolate defined segments in an enterprise network, reducing the number of attack surfaces to limit lateral movement. As one of the main elements of zero trust based on NIST’s zero-trust framework, microsegmentation is valuable in securing IT infrastructure despite its weaknesses in protecting private networks.

IT and security teams need a breach mindset

It is critical to assume external networks are a viable threat: hostile and intent on breaching infrastructure and moving laterally through it. With an assumed-breach mindset, IT and security teams can tackle the challenges of eradicating as much implicit trust as possible from a tech stack.

Identity management helps with implicit trust in tech stacks

Replacing implicit trust with adaptive and explicit trust is a goal many enterprises set for themselves when they define a zero-trust strategy. Human and machine identities are the security perimeters of any zero-trust network, and identity management needs to provide least-privileged access at scale across each. Microsegmentation becomes challenging in defining which identities belong in each segment. With nearly every enterprise having a large percentage of their workload in the cloud, they must encrypt all data at rest in each public cloud platform using different customer-controlled keys.
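The lateral-movement limit that microsegmentation aims for can be sketched as a default-deny policy check. The segment names and the single allowed flow below are hypothetical; the point is that traffic between segments is denied unless explicitly permitted:

```python
# Minimal default-deny segmentation model (hypothetical segments and rules):
# traffic is permitted only inside a segment or on an explicitly allowed pair.
SEGMENTS = {
    "web-frontend": {"web1", "web2"},
    "payments-db": {"db1"},
    "build-servers": {"ci1"},
}
ALLOWED_FLOWS = {("web-frontend", "payments-db")}  # explicit east-west exception

def segment_of(host):
    return next((s for s, hosts in SEGMENTS.items() if host in hosts), None)

def is_allowed(src_host, dst_host):
    src, dst = segment_of(src_host), segment_of(dst_host)
    if src is None or dst is None:
        return False  # unknown identity: deny by default
    return src == dst or (src, dst) in ALLOWED_FLOWS

print(is_allowed("web1", "db1"))  # True: explicitly permitted flow
print(is_allowed("ci1", "db1"))   # False: lateral movement blocked
```

Note that direction matters: allowing web-frontend to reach payments-db does not let the database segment initiate connections back, which is what contains an attacker who lands on a single host.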
Securing data at rest is a core requirement for nearly every enterprise pursuing a zero-trust strategy today, made more urgent as more organizations migrate workloads to the cloud.

Microsegmentation policies must scale across on-premises and cloud

Microsegmentation needs to scale across on-premises, cloud and hybrid-cloud environments to reduce the risk of cyberattackers capitalizing on configuration errors to gain access. It is also essential to have a playbook for managing IAM and PAM permissions by platform to enforce least-privileged access to confidential data. Gartner predicts that through 2023, at least 99% of cloud security failures will be the user’s fault. Getting microsegmentation right across on-premises and cloud environments can make or break a zero-trust initiative.

Excel at real-time monitoring and scanning

Identifying potential breach attempts in real time is a goal every security information and event management (SIEM) and cloud security posture management (CSPM) vendor is pursuing on its roadmap. The innovation in the SIEM and CSPM markets is accelerating, making it possible for enterprises to scan networks in real time and identify insecure configurations and potential breach threats. Leading SIEM vendors include CrowdStrike Falcon, Fortinet, LogPoint, LogRhythm, ManageEngine, QRadar, Splunk, Trellix and others.

Challenges of microsegmentation

The majority of microsegmentation projects fail because on-premises private networks are among the most challenging domains to secure. Most organizations’ private networks are also flat and defy granular policy definitions at the level microsegmentation needs to fully secure their infrastructure. The flatter the private network, the more challenging it becomes to control the blast radius of malware, ransomware and open-source attacks including Log4j, privileged access credential abuse and all other forms of cyberattack.
The challenges of getting microsegmentation right include how complex implementations can become if they’re not planned well and lack senior management’s commitment. Implementing microsegmentation as part of a zero-trust initiative also faces the following roadblocks CISOs need to be ready for:

Adapting to complex workflows in real time

Microsegmentation requires considering the adaptive nature of how organizations get work done without interrupting access to systems and resources in the process. Failed microsegmentation projects generate thousands of trouble tickets in IT service management systems. Microsegmentation projects that are poorly designed run the risk of derailing an organization’s entire zero-trust initiative.

Microsegmenting can take months of iterations

To reduce the impact on users and the organization, it is a good idea to test multiple iterations of a microsegmentation implementation in a test region before attempting to take it live. It is also important to work through how microsegmentation will need to adapt to and support future business plans, including new business units or divisions, before going live.

Cloud-first enterprises value speed over security

Organizations whose tech stacks are built for speed and agility tend to see microsegmentation as a potential impediment to getting more devops work done. Security and microsegmentation are perceived as roadblocks in the way of devops getting more internal app development done on schedule and under budget.

Staying under budget

Scoping microsegmentation with realistic assumptions and constraints is critical to keeping funding for an organization’s entire zero-trust initiative. Often, enterprises will tackle microsegmentation later in their zero-trust roadmap, after getting an initial set of wins to establish and grow credibility and trust in the initiative. Adding to the challenge of streamlining microsegmentation projects and keeping them under budget are inflated vendor claims.
No single vendor can provide zero trust for an organization out of the box. Cybersecurity vendors that misrepresent zero trust as a product add to the confusion and can push the boundaries of any zero-trust budget.

Prioritizing microsegmentation

Traditional network segmentation techniques are failing to keep up with the dynamic nature of cloud and data center workloads, leaving tech stacks vulnerable to cyberattacks. More adaptive approaches to application segmentation are needed to shut down lateral movement across a network. CISOs and their teams see the growing variety of data center workloads becoming more challenging to scale and manage using traditional methods, which can’t scale to support zero trust either. Enterprises pursue microsegmentation due to the following factors:

Growing interest in zero-trust network access (ZTNA)

Concerned that application and service identities aren’t protected with least-privileged access, more organizations are looking at how ZTNA can help secure every identity and endpoint. Dynamic networks supporting virtual workforces and container-based security are the highest priorities.

Devops teams are deploying code faster than native cloud security can keep up

Relying on each public cloud provider’s unique IAM, PAM and infrastructure-as-a-service (IaaS) security safeguards, which often include antivirus, firewalls, intrusion prevention and other tools, isn’t keeping hybrid cloud configurations secure. Cyberattackers look for the gaps created by relying on native cloud security for each public cloud platform.

Quickly improving tools for application mapping

Microsegmentation vendors are improving the tools used for application communication mapping, streamlining the process of defining a segmentation strategy. The latest generation of tools helps IT, data center and security teams validate communication paths and whether they’re secure.
Rapid shift to microservices container architecture

With the growing reliance on microservices’ container architectures, there is an increasing amount of east-west network traffic among devices in a typical enterprise’s data center. That development is restricting how effective network firewalls can be in providing segmentation.

Making microsegmentation work in the enterprise

In a recent webinar titled “The time for Microsegmentation is now,” PJ Kirner, CTO and cofounder of Illumio, and David Holmes, senior analyst at Forrester, provided insights into the most pressing things organizations should keep in mind about microsegmentation. “You won’t really be able to credibly tell people that you did a zero-trust journey if you don’t do the microsegmentation,” Holmes said during the webinar. “If you have a physical network somewhere, and I recently was talking to somebody, they had this great quote, they said, ‘The global 2000 will always have a physical network forever.’ And I was like, ‘You know what? They’re probably right.’ At some point, you’re going to need to microsegment that. Otherwise, you’re not zero trust.” Kirner and Holmes advise organizations to start small, iterate often with basic policies first, and resist over-segmenting a network. “You may want to enforce controls around, say, a non-critical service first, so you can get a feel for what’s the workflow like. If I did get some part of the policy wrong, a ticket gets generated, etc., and learn how to handle that before you push it out across the whole org,” Holmes said. Enterprises also need to target the most critical assets and segments in planning for microsegmentation. Kirner alluded to how Illumio has learned that matching the microsegmentation style to both the location of workloads and the type of environment is an essential step during planning.
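Matching segmentation to workload metadata rather than to network addresses can be illustrated with a label-selector sketch. The workloads and labels below are hypothetical; the idea is that a rule selects workloads by metadata, so it keeps applying even when a workload is redeployed with a new address:

```python
# Hypothetical label-based policy: rules select workloads by metadata,
# not by IP address, so re-scheduling a container doesn't break the rule.
workloads = [
    {"name": "api-1", "ip": "10.0.1.7", "labels": {"app": "api", "env": "prod"}},
    {"name": "db-1", "ip": "10.0.2.9", "labels": {"app": "db", "env": "prod"}},
    {"name": "api-dev", "ip": "10.0.3.2", "labels": {"app": "api", "env": "dev"}},
]

def select(selector, pool):
    """Return workload names whose labels contain every selector key/value."""
    return [w["name"] for w in pool if selector.items() <= w["labels"].items()]

# The rule keeps matching even after api-1 is redeployed with a new address.
workloads[0]["ip"] = "10.0.9.42"
print(select({"app": "api", "env": "prod"}, workloads))  # ['api-1']
```

This is the design choice behind the advice to keep policies adaptive: a selector over environment and location metadata survives the churn of dynamic data center workloads, while an IP-based rule has to be rewritten every time a workload moves.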
Given how microservices container architectures are increasing the amount of east-west traffic in data centers, it is a good idea not to base segmentation strategies on IP addresses. Instead, the goal needs to be defining and implementing a more adaptive microsegmentation approach that can continuously flex to an organization’s requirements. The webinar also highlighted how effective microsegmentation is at securing new assets, including endpoints, as part of an adaptive approach to segmenting networks. Getting microsegmentation right is the cornerstone of a successful zero-trust framework. Having an adaptive microsegmentation architecture that can flex and change as a business grows and adds new business units or divisions can keep a company more competitive while reducing the risk of a breach."
14,563
2,023
"How do you make a real-time database faster? Rockset has a few ideas | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-do-you-make-a-real-time-database-faster-rockset-has-a-few-ideas"
"How do you make a real-time database faster? Rockset has a few ideas

Real-time analytics database vendor Rockset today announced an update to its namesake platform that introduces a new architecture designed to help accelerate enterprise use cases. A separation of some basic operations is central to achieving the speedup. Modern data platforms, including data lakehouses, have increasingly separated the compute component, where queries are executed, from the storage piece, where data is stored. But traditionally, compute for data query execution hasn’t been separated from data ingestion. For a real-time database, data needs to be ingested from all sources. Typically, the compute engine that supports ingest is the same one that provides the query engine. But this can lead to performance and latency issues, as well as challenges for executing real-time analytics queries on data.
Using the same compute for both ingest and query also means that in the cloud, an organization has to size a compute instance for both types of operations, rather than optimizing for each specific use case. With its latest update, Rockset is now separating the two operations in an approach it refers to as “compute-compute separation.” “With real-time analytics, the data never stops; you’re processing incoming data all the time and also your queries never stop,” Rockset cofounder and CEO Venkat Venkataramani told VentureBeat. “When compute is running on both ingestion and query processing 24/7, it can become too slow, too expensive and too cumbersome to operate — and we now eliminate all of those things.”

Open-source RocksDB at the center of compute-compute separation

The team behind Rockset has its roots in Meta (formerly Facebook). Among the core technologies that Venkataramani and his cofounders helped build is the open-source RocksDB persistent key-value store. RocksDB is at the foundation of Rockset, providing a base for database storage and ingestion. The new compute-compute separation capabilities also have their roots in new features found in RocksDB that Rockset is enabling in its commercial database platform. Venkataramani explained that Rockset helped develop the RocksDB memtable replicator, which can efficiently and reliably duplicate the memory state of data in RocksDB from one compute instance to another. “Now where one machine is doing writes and another machine is doing reads, they still can get real-time access to each other’s state,” Venkataramani explained.
“The rest of the Rockset stack has already been built to leverage that in terms of data ingestion and SQL query processing.”

Less duplication

Replicating the state of a compute instance is not the same as wholesale replication of the data in an attempt to enable real-time data ingestion and data queries. Venkataramani said that a simple, “naïve” way of achieving compute-compute separation could be something as basic as using the replicas functionality in a relational database like PostgreSQL. In the PostgreSQL replicas model, an organization can have a primary node performing data ingestion and a replica that essentially serves all queries. Venkataramani explained that with that approach, the ingested data has been duplicated. This means more data storage, more cost and some latency. “The magic here is that we can do this without duplicating compute, and without duplicating storage,” said Venkataramani.

What compute-compute separation enables for enterprise data analytics

With compute-compute separation, Venkataramani said, an enterprise can have cloud compute instances that are optimized for actual use cases. For example, some organizations might have fewer query compute needs and more data ingest, or vice versa. Without this model, Venkataramani said, organizations often end up overprovisioning resources to meet the maximum requirement of both ingest and compute. The new Rockset update will also enable better overall reliability of applications with the separation of data ingest from query processing. The approach will also allow for concurrency scaling as query volume grows. Venkataramani explained that if an application is initially provisioned to handle 100 queries a second, but then demand spikes to 500 queries a second, the isolated query compute engine can spin up new virtual compute instances to handle demand.
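A toy model of the idea, not Rockset's actual implementation: an ingest instance and a query instance share one copy of durable storage, and the ingester's in-memory state is replicated to the query instance so recent writes are visible without duplicating either storage or the ingestion work. All class and variable names below are invented for illustration:

```python
# Toy model of compute-compute separation: one ingest instance and one
# query instance share durable storage, and the ingester replicates its
# in-memory memtable so queries see fresh writes before any flush.
shared_storage = {}  # durable storage, written once, shared by both sides

class Ingester:
    def __init__(self):
        self.memtable = {}  # recent writes not yet flushed to storage

    def write(self, key, value):
        self.memtable[key] = value

    def flush(self):
        shared_storage.update(self.memtable)
        self.memtable.clear()

class QueryNode:
    def __init__(self):
        self.replica_memtable = {}  # replicated copy of the ingester's state

    def sync(self, ingester):
        # stands in for memtable replication between compute instances
        self.replica_memtable = dict(ingester.memtable)

    def read(self, key):
        # recent writes win over flushed data; storage is never duplicated
        return self.replica_memtable.get(key, shared_storage.get(key))

ingester, query_node = Ingester(), QueryNode()
ingester.write("order:1", "pending")
query_node.sync(ingester)
print(query_node.read("order:1"))  # 'pending', visible before any flush
```

Contrast with the PostgreSQL-replica approach described above: there, the replica holds a second full copy of the ingested data, whereas here only the small in-memory delta is replicated and the durable storage stays single-copy.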
“Even if there’s a flash flood of data coming in from the data ingestion side, your application query processing will be completely isolated from that, which allows you to build more reliable applications,” he said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,564
2,016
"Google BigQuery now lets you analyze data from Google Sheets | VentureBeat"
"https://venturebeat.com/business/google-bigquery-now-lets-you-analyze-data-from-google-sheets"
"Google BigQuery now lets you analyze data from Google Sheets

(Image caption: Working with Google Sheets data in BigQuery.)

Google is announcing today that its BigQuery cloud service for running SQL-like queries on data can now easily take in data from the Google Sheets cloud-based spreadsheet software and then save query results inside Google Sheets files. And changes to spreadsheets won’t cause problems for BigQuery. “Time after time, we can make changes within our Google Sheets spreadsheet, and BigQuery will automatically pick up the changes next time you run a query against the spreadsheet!” Google BigQuery technical program manager Tino Tereshko wrote in a blog post. If you’re a power user of Sheets, you’ll probably appreciate the ability to do more fine-grained research with data in your spreadsheets. 
It’s a sensible enhancement for Google to make, as it unites BigQuery with more of Google’s own existing services. Previously, Google made it possible to analyze Google Analytics data in BigQuery. These sorts of integrations could make BigQuery a better choice in the market for cloud-based data warehouses, which is increasingly how Google has positioned BigQuery. Public cloud market leader Amazon Web Services (AWS) has Redshift but no widely used tool for spreadsheets. Microsoft Azure’s SQL Data Warehouse, which has been in preview for several months, does not currently have an official integration with Microsoft Excel, surprising though it may be. But Google in the past few months has shown signs of caring more about what companies want out of cloud services. In March the company disclosed plans to open data centers in 12 more regions around the world. In December, Google enhanced BigQuery with custom quotas to limit the amount of money a user spends on any given day. "
14,565
2,023
"3 ways businesses can ethically and effectively develop generative AI models | VentureBeat"
"https://venturebeat.com/ai/3-ways-businesses-can-ethically-and-effectively-develop-generative-ai-models"
"Guest post: 3 ways businesses can ethically and effectively develop generative AI models

President Biden is meeting with AI experts to examine the dangers of AI. Sam Altman and Elon Musk are publicly voicing their concerns. Consulting giant Accenture became the latest to bet on AI, announcing plans to invest $3 billion in the technology and double its AI-focused staff to 80,000. That’s on top of other consulting firms, with Microsoft, Alphabet and Nvidia joining the fray. Major companies aren’t waiting for the bias problem to disappear before they adopt AI, which makes it even more urgent to solve one of the biggest challenges facing all of the major generative AI models. But AI regulation will take time. Because every AI model is constructed by humans and trained on data collected by humans, it’s impossible to eliminate bias entirely. 
Developers should strive, however, to minimize the amount of “real-world” bias they replicate in their models.

Real-world bias in AI

To understand real-world bias, imagine an AI model trained to determine who is eligible to receive a mortgage. Training that model based on the decisions of individual human loan officers — some of whom might implicitly and irrationally avoid granting loans to people of certain races, religions or genders — poses a massive risk of replicating their real-world biases in the output. The same goes for models that are meant to mimic the thought processes of doctors, lawyers, HR managers and countless other professionals. AI offers a unique opportunity to standardize these services in a way that avoids bias. Conversely, failing to limit the bias in our models poses the risk of standardizing severely defective services to the benefit of some and at the expense of others. Here are three key steps that founders and developers can take to get it right:

1. Pick the right training method for your AI model

ChatGPT, for example, falls under the broader category of machine learning as a large language model (LLM), meaning it absorbs enormous quantities of text data and infers relationships between words within the text. On the user side, that translates into the LLM filling in the blank with the most statistically probable word given the surrounding context when answering a question. But there are many ways to train machine learning models on data. Some health tech models, for example, rely on big data in that they train their AI using the records of individual patients or the decisions of individual doctors. For founders building models that are industry-specific, such as medical or HR AI, such big-data approaches can lend themselves to more bias than necessary. 
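The "most statistically probable word given the surrounding context" behavior can be illustrated with a tiny bigram model. This is a drastic simplification of how a real LLM works, and the mini-corpus is invented for the sketch:

```python
from collections import Counter, defaultdict

# Invented mini-corpus; a real LLM trains on billions of words.
corpus = ("the patient reports chest pain . "
          "the patient reports mild fever").split()

# Count, for each word, which words follow it and how often.
nxt = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    nxt[w1][w2] += 1

def predict(word):
    # "Fill in the blank" with the most frequent continuation seen so far.
    return nxt[word].most_common(1)[0][0]

print(predict("patient"))  # "reports" -- seen twice after "patient"
```

Even at this toy scale, the point about bias is visible: whatever patterns the training text contains, including skewed or unrepresentative ones, are exactly what the model will reproduce.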
Let’s picture an AI chatbot trained to correspond with patients to produce clinical summaries of their medical presentations for doctors. If built with the approach described above, the chatbot would craft its output based on consulting the data — in this case, records — of millions of other patients. Such a model might produce accurate output at impressive rates, but it also imports the biases of millions of individual patient records. In that sense, big-data AI models become a cocktail of biases that’s hard to track, let alone fix. An alternative to such big-data approaches, especially for industry-specific AI, is to train your model on the gold standard of knowledge in your industry to ensure bias isn’t transferred. In medicine, that’s peer-reviewed medical literature. In law, it could be the legal texts of your country or state, and for autonomous vehicles, it might be actual traffic rules as opposed to data from individual human drivers. Yes, even those texts were produced by humans and contain bias. But considering that every doctor strives to master medical literature and every lawyer spends countless hours studying legal documents, such texts can serve as a reasonable starting point for building less-biased AI.

2. Balance literature with changing real-world data

There’s tons of human bias in my field of medicine, but it’s also a fact that different ethnic groups, ages, socio-economic groups, locations and sexes face different levels of risk for certain diseases. More African Americans suffer from hypertension than Caucasians do, and Ashkenazi Jews are infamously more vulnerable to certain illnesses than other groups. Those are differences worth noting, as they factor into providing the best possible care for patients. Still, it’s important to understand the root of these differences in the literature before injecting them into your model. 
Are doctors giving women a certain medication at higher rates — as a result of bias toward women — that is putting them at higher risk for a certain disease? Once you understand the root of the bias, you’re much better equipped to fix it. Let’s go back to the mortgage example. Fannie Mae and Freddie Mac, which back most mortgages in the U.S., found people of color were more likely to earn income from gig-economy jobs, Business Insider reported last year. That disproportionately prevented them from securing mortgages because such incomes are perceived as unstable — even though many gig-economy workers still have strong rent-payment histories. To correct for that bias, Fannie Mae decided to add the relevant rent-payment history variable into credit-evaluation decisions. Founders must build adaptable models that are able to balance official evidence-based industry literature with changing real-world facts on the ground.

3. Build transparency into your AI model

To detect and correct for bias, you’ll need a window into how your model arrives at its conclusions. Many AI models don’t trace back to their originating sources or explain their outputs. Such models often confidently produce responses with stunning accuracy — just look at ChatGPT’s miraculous success. But when they don’t, it’s almost impossible to determine what went wrong and how to prevent inaccurate or biased output in the future. Considering that we’re building a technology that will transform everything from work to commerce to medical care, it’s crucial for humans to be able to spot and fix the flaws in its reasoning — it’s simply not enough to know that it got the answer wrong. Only then can we responsibly act upon the output of such a technology. One of AI’s most promising value propositions for humanity is to cleanse a great deal of human bias from healthcare, hiring, borrowing and lending, justice and other industries. 
That can only happen if we foster a culture among AI founders that works toward finding effective solutions for minimizing the human bias we carry into our models. Dr. Michal Tzuchman-Katz, MD, is cofounder, CEO and chief medical officer of Kahun Medical. "
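The kind of audit the mortgage example calls for can be sketched as a simple comparison of approval rates across groups. The data and the 0.8 cutoff (the widely used "four-fifths rule") are illustrative assumptions, not figures from the article:

```python
# Hypothetical loan decisions; "group" stands in for any protected attribute.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Approval rate per group."""
    totals, approved = {}, {}
    for r in rows:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + int(r["approved"])
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Four-fifths rule: flag if the lowest group's rate is under 80% of the highest.
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)               # {'A': 0.75, 'B': 0.25}
print(impact_ratio < 0.8)  # True: a disparity worth investigating at its root
```

A flag like this is only the starting point; as the article argues, the next step is understanding the root of the gap (such as gig-economy income being scored as unstable) before correcting the model.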
14,566
2,023
"UserTesting launches machine learning-powered Friction Detection for enhanced behavioral analytics | VentureBeat"
"https://venturebeat.com/ai/usertesting-launches-machine-learning-powered-friction-detection-for-enhanced-behavioral-analytics"
"UserTesting launches machine learning-powered Friction Detection for enhanced behavioral analytics

(Image Credit: UserTesting)

UserTesting, a company that helps organizations test their products and services with end users, announced today the latest updates to its Human Insights Platform. These updates include a new feature called Friction Detection. Friction Detection uses machine learning to analyze video recordings of user sessions and identify moments when users encounter difficulty or confusion while performing a task or navigating a workflow. The feature aims to help product designers and developers pinpoint areas that need improvement and enhance the overall user experience. The announcement comes after UserTesting went private in a $1.3 billion deal in October 2022, in which it merged with UserZoom, another user experience testing company. 
The merger, which was completed on April 3, combined UserTesting’s video-based approach with UserZoom’s various tools for measuring user behavior and feedback. Andy MacMillan, CEO of UserTesting, said in an interview with VentureBeat that the merger would enable the company to offer a more comprehensive view of user experience and generate more data for its machine learning capabilities. “The idea of the platform is to have more transaction volume and more test data, which is really interesting for our machine learning prospects,” he said. UserTesting is one of several companies that use machine learning to augment human insights and provide more actionable recommendations for product development. Others include FullStory, which analyzes user interactions on websites and apps, and ContentSquare, which tracks user behavior across digital channels.

Why friction detection is much more than sentiment analysis

There are several reasons why a company would use a service like UserTesting in the first place. Sometimes it’s to learn how users feel about a product or service. According to MacMillan, often it’s also about understanding the user experience overall. While sentiment is important, as it can identify whether a user is happy or perhaps angry about the experience, many factors can lead to that sentiment. For example, if a user experiences friction in a process, that is, some kind of barrier or hurdle that makes the process harder or less enjoyable to execute, that’s not a good thing. Friction could also mean a user is unable to complete a process like a purchase, which ultimately means less revenue for a vendor. To date, the way companies found points of friction was by manually searching through testing videos for the moments where the user had trouble. But that’s not a scalable approach. 
MacMillan said that what UserTesting discovered is that, with the large volume of data it has, it could build a machine learning model to detect friction. The model can analyze a session and determine where the user conducting a test ran into trouble trying to complete a task. The attributes that could indicate friction include excessive scrolling or clicking behavior and other delays that don’t lead the user to the next step in a workflow. “It’s one of these things where we need to boil it down to something simple, which is, the user is frustrated and not finding what they’re looking for,” MacMillan said. “What we’re really doing is helping people to zoom in to those moments.”

How friction detection works

The UserTesting system has long had an approach known as interactive path flows, which track the user journey as they go through testing. MacMillan said that UserTesting first overlaid basic sentiment analysis on top of the path flow, with a color-coded system of red, yellow and green indicating user satisfaction. The next piece is something UserTesting refers to as an intent path. This defines the intent the user has when they are using a service, whether they are shopping or just collecting information. Friction detection is the new piece on top: it identifies where a user is struggling as they go through the path flow. The friction detection machine learning model combines multiple assets within the UserTesting interactive path flow portfolio and applies an analysis on top of them. “The whole goal here is to take a bunch of different assets that we’ve had available in our customer experience narratives and deliver them in a simple, straightforward way to somebody who’s maybe not an experienced researcher, to show them where people struggled,” MacMillan said. 
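The friction signals described above (excessive scrolling or clicking with no forward progress) can be sketched as a rule-based detector. The event format and thresholds here are invented for illustration; UserTesting's actual feature is a trained machine learning model, not hand-written rules like these.

```python
def friction_moments(events, window=10.0, scroll_limit=5, click_limit=5):
    """Flag timestamps where a user appears stuck.

    events: list of (timestamp_seconds, kind) tuples, where kind is
    "scroll", "click" or "step" ("step" = progressed in the workflow).
    """
    flagged = []
    scrolls = clicks = 0
    window_start = None
    for t, kind in events:
        if kind == "step":            # forward progress: no friction here
            scrolls = clicks = 0
            window_start = None
            continue
        if window_start is None or t - window_start > window:
            scrolls = clicks = 0      # start a fresh observation window
            window_start = t
        scrolls += kind == "scroll"
        clicks += kind == "click"
        if scrolls > scroll_limit or clicks > click_limit:
            flagged.append(t)         # lots of activity, no progress
            scrolls = clicks = 0
            window_start = None
    return flagged

# Six rapid scrolls with no forward step get flagged; a quick click-through does not.
stuck = [(0, "step")] + [(i, "scroll") for i in range(1, 7)] + [(20, "step")]
smooth = [(0, "step"), (1, "click"), (2, "step")]
print(friction_moments(stuck))   # [6]
print(friction_moments(smooth))  # []
```

The appeal of a learned model over rules like these is exactly the scalability point MacMillan makes: thresholds that work for one workflow fail for another, while a model trained on a large volume of sessions can generalize across them.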
“The power of machine learning and where we’re going is actually to take complicated things to make them feel simple.” "