id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, 7–208 chars) | url (string, 20–263 chars) | text (string, 852–324k chars)
---|---|---|---|---
3013 | 2023 |
"Microsoft and OpenAI officially extend partnership with multi-billion dollar investment | VentureBeat"
|
"https://venturebeat.com/ai/microsoft-and-openai-officially-announce-extended-partnership-multi-billion-investment"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft and OpenAI officially extend partnership with multi-billion dollar investment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Microsoft finally put a ring on it: Weeks of rumors surrounding Big Tech’s hottest romance were put to rest today, as Microsoft and OpenAI officially announced an extended partnership with Microsoft’s new multi-billion dollar investment into the research lab that launched ChatGPT less than two months ago.
In a blog post, OpenAI said "this multi-year, multi-billion dollar investment from Microsoft follows their previous investments in 2019 and 2021, and will allow us to continue our independent research and develop AI that is increasingly safe, useful, and powerful."
OpenAI remains a capped-profit company governed by a nonprofit
The company added that "in pursuit of our mission to ensure advanced AI benefits all of humanity, OpenAI remains a capped-profit company and is governed by the OpenAI Nonprofit. This structure allows us to raise the capital we need to fulfill our mission without sacrificing our core beliefs about broadly sharing benefits and the need to prioritize safety." OpenAI also nodded to the importance of Microsoft Azure, which last week made its Azure OpenAI Service generally available: "We've worked together to build multiple supercomputing systems powered by Azure, which we use to train all of our models. Azure's unique architecture design has been crucial in delivering best-in-class performance and scale for our AI training and inference workloads. Microsoft will increase their investment in these systems to accelerate our independent research and Azure will remain the exclusive cloud provider for all OpenAI workloads across our research, API and products."
Microsoft and OpenAI's 'shared ambition'
In a separate Microsoft blog post, chairman and CEO Satya Nadella noted that the partnership was formed around a "shared ambition" to "responsibly advance cutting-edge AI research and democratize AI as a new technology platform." In the next phase of the partnership, he said, "developers and organizations across industries will have access to the best AI infrastructure, models, and toolchain with Azure to build and run their applications." The blog post also noted that the agreement "extends our ongoing collaboration across AI supercomputing and research and enables each of us to independently commercialize the resulting advanced AI technologies," and emphasized that Azure is OpenAI's exclusive cloud provider, powering all OpenAI workloads across research, products and API services.
"
|
3014 | 2023 |
"Cloud giants Amazon, Microsoft and Google ignite battle over AI | VentureBeat"
|
"https://venturebeat.com/ai/cloud-giants-amazon-microsoft-and-google-ignite-battle-over-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud giants Amazon, Microsoft and Google ignite battle over AI Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There’s a point in Star Wars Episode II: Attack of the Clones at which a younger Jedi Master Yoda utters the meme-worthy phrase “Begun, the clone war has,” in his signature. atypical backwards sentence structure.
I was reminded of that line today, with the news that Amazon is investing a galactic-sized sum of $4 billion into Anthropic, the San Francisco-based startup behind the Claude 2 generative AI chatbot and chief rival of OpenAI and its ChatGPT.
Except, in the case of the tech industry, the phrase is more like “begun, the AI wars have.” Cloud computing leaders Amazon, Microsoft, and Google are all now backing different AI foundation models and/or incubating their own.
Sure, if you’ve been following VentureBeat and the general tech news coverage around AI for the last few months or years, you’ll of course have an awareness that Amazon is far from the first household name to get involved in the intensely hyped space of generative AI — that is, AI that uses machine learning algorithms to produce new data, such as text, images, video, or audio, after training on vast quantities of prior data.
Indeed, Microsoft really kicked things off with its 2019 investment into a somewhat obscure generative AI startup called OpenAI, and followed up by investing again in 2021 and earlier this year with its own "multi-billion dollar," multi-year commitment.
And Google, too, not to be left out of the party, announced a $300 million investment into Anthropic back in February , an amount that now seems paltry by comparison. The search giant also spread the love, backing video generative AI startup Runway ML.
Still, the news today of Amazon’s big investment signals to me a level of escalation in the competition between the three largest cloud providers (by market share) to partner with, benefit from, and offer their consumers the latest and greatest generative AI technologies.
In one swift move, Amazon has dethroned Google as Anthropic’s primary big tech backer, and gained a potentially hugely valuable ally as demand for, and interest in, generative AI continues to ratchet up across sectors.
Why is this moment qualitatively different from the investments and announcements in generative AI to date? I can offer a few reasons:
After battling over the cloud, big tech sees AI as the next, most important frontier (and the two are inexorably linked)
When most people think of the cloud giants, their mind likely goes to their popular consumer-facing services: Microsoft makes computers and software, Amazon sells stuff over the internet, Google lets you search and email.
And yet, the most recent, and arguably still most important, competition among Microsoft, Google and Amazon prior to AI has been the war over the cloud.
With so many of us individuals and our companies generating and relying on data, the cloud has become more important than ever as a place not only to store this data, but to run applications atop it, including, of course, generative AI apps.
According to market research firm Gartner , the global cloud services market is expected to grow from $491 billion last year to $597.3 billion this year, a 21.7% increase year-over-year, driven in no small part by the boom in generative AI.
“For example, generative AI is supported by large language models (LLMs), which require powerful and highly scalable computing capabilities to process data in real-time,” said Sid Nag, Vice President Analyst at Gartner, when the company released its cloud services forecasting report in April 2023. “Cloud offers the perfect solution and platform. It is no coincidence that the key players in the generative AI race are cloud hyperscalers.” Current deficiencies could explain the rush to embrace AI AWS, Amazon’s cloud business, is on its own accounted for nearly 70% of the entire company’s profit last quarter. And as the leading cloud services provider in the world with nearly 40% marketshare, according to Gartner, Amazon is the leader to beat in cloud.
At Microsoft, Satya Nadella rose to become CEO in 2014 after Steve Ballmer lost the mobile race to Apple and Google. Nadella was selected due to his success running Microsoft’s Cloud and Enterprise group, which was responsible for launching Microsoft Azure.
Today, Microsoft Azure is the second largest cloud provider behind AWS and closing in — a respectable place to be, but you can be damned sure that Microsoft would prefer to be no. 1. If generative AI helps the company do that — by increasing demand from GenAI companies such as OpenAI — then all the better.
Amazon Alexa is not enough
Meanwhile, Amazon's move to back Anthropic seems wise given the company only last week announced its new LLM-powered Alexa assistant.
The timing of that particular announcement was curious given Amazon's role as an early leader in conversational AI assistants, having launched Alexa as the voice of its first Amazon Echo device back in 2014.
Shouldn't Amazon already have all the in-house data and expertise necessary to field a competitive Gen AI model? While Amazon's Alexa is by some measures the most beloved voice assistant, the smart speaker market that it helped launch has begun to plateau, and the competition in that sector has only increased significantly in the last decade as Apple fielded its HomePod with the Siri voice assistant and Google offered its Google Assistant on a variety of Nest and Google Home devices.
New Amazon CEO Andy Jassy, who took over in 2021, cut employees from Amazon's devices division, and Reuters recently reported that the team there is said to have low morale and a weak development pipeline.
With the current devices strategy running into headwinds, and the first Alexa LLM announcement coming almost a year after OpenAI debuted ChatGPT — by all accounts, a much more capable and powerful AI assistant — it would seem Amazon recognizes it has work to do to catch up in AI, and sees Anthropic as a good shortcut for doing so, even if it risks undercutting AWS's positioning as a "Switzerland," i.e., neutral party, when it comes to running AI models and storing their training data.
Of course, undercutting its role as a neutral platform has never stopped Amazon before: the company’s Amazon Basics line of products competes with others from leading third-party vendors on its e-commerce marketplace, and its Amazon Prime Studios division makes movies and TV shows that compete for viewers’ attention with those of the leading film studios, whose titles can also be rented, bought, or streamed for free through Amazon Prime Video (and FreeVee).
Google plays catchup
Finally, we get to Google. The company essentially kickstarted the Gen AI revolution of the last half-decade after a number of researchers at its AI research division Google Brain published the seminal paper "Attention Is All You Need" in 2017, outlining the openly published methods for building the kind of transformer models that power OpenAI's ChatGPT, Anthropic's Claude, and basically all of the leading consumer and enterprise-facing generative AI models popular today.
But Google, with its large and notoriously slow and political internal bureaucracy, failed to keep many of these researchers around, and they've since gone on to found Gen AI startups that the company now competes against directly, including Cohere and Sakana AI.
And as VentureBeat’s own editorial director Michael Nuñez covered in his review of the latest updates to Google Bard , the search giant’s Gen AI assistant based on its LLM PaLM 2, Google’s Gen AI efforts have so far failed to impress and pale in comparison in functionality and utility to OpenAI. Little wonder Google is reportedly moving fast to release a true competitor to OpenAI’s underlying GenAI model GPT-4 , which Google calls “Gemini.” At the same time, Google is fighting to claw its way up in the cloud service provider rankings from a distant third behind AWS and Microsoft.
Behind on both fronts, Google needs a big win: a powerful new Gen AI model could be the proverbial single stone to kill two birds.
Google may have planned to leverage some of Anthropic’s tech to assist in this quest, but with Amazon sweeping in as its new, higher-rolling backer, Google seems like it will have to go it alone in fielding a compelling new AI model and in making its cloud the go-to backend service for Gen AI models and apps.
Where this leaves us
Despite the looming one-year anniversary of ChatGPT's launch in November, the Gen AI wars are really just beginning in earnest. If the competition were a baseball game, we'd be in the first few innings. And with the rate of competition — and spending — increasing so rapidly, this game may go on to extra innings.
Now, unlike a game, there’s not necessarily a “winner-takes-all” scenario in Gen AI. In fact, the potential applications for the technology and projections for its reach clearly do support a scenario in which there are multiple tech providers, some large and some small, some specialized and some generalized.
And, of course, we can’t discount the prominence of open source AI. Right now, Meta Platforms is leading the charge in terms of existing tech giants by open sourcing and licensing for commercial usage its Llama 2 LLM.
Llama 2 and other open-source models like Falcon 180B and DeciLM 6B offer a way to "level the playing field," giving any enterprise access to powerful Gen AI tools to build applications on top of, or to modify for its specific uses.
Yet, even in those cases, people deploying and fine-tuning open source AI models will still need cloud servers on which to store data and run inferences, which is why the three big cloud players — Amazon, Microsoft, and Google — are also motivated to deliver preferred AI offerings of their own, or through their partners.
If you’re an enterprise administrator singing up for Microsoft Azure or AWS, why not use one of their suggested AI models instead of trying to build or run your own? That seems to be the thinking motivating some of this partnering and gamesmanship in the Gen AI/cloud space.
No matter what happens, you can be sure the war will be a fierce and expensive one.
"
|
3015 | 2023 |
"Microsoft announces generative AI-powered Copilot 365 to 'change work as we know it' | VentureBeat"
|
"https://venturebeat.com/ai/microsoft-announces-generative-ai-powered-copilot-365-to-change-work-as-we-know-it"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft announces generative AI-powered Copilot 365 to ‘change work as we know it’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In a pre-recorded demo event dubbed "The Future of Work With AI" — which felt to this reporter like a highly produced infomercial ("Wait, there's more!") — Microsoft capped an epic week in generative AI by announcing Copilot 365 to "change work as we know it." Layering onto the earlier "Copilot" verbiage that accompanied its Bing announcements last month, Copilot 365 combines large language models — namely GPT-4, which Microsoft confirmed powers Bing — with Microsoft Graph data (from your calendar, emails, chats, documents, meetings) and Microsoft 365 apps including Teams, Word, Outlook and Excel.
For example, by plugging into your calendar and email, Copilot 365 can help you get ready for the day, generating bullets for you to focus on in your next meeting. It can also generate documents based on existing documents; create a PowerPoint, complete with layouts and images; use natural language to analyze data in Excel; and automatically capture meeting notes.
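The underlying pattern Microsoft describes is "Graph data plus a large language model": pull structured data from Microsoft Graph, then hand it to a model with an instruction. Below is a minimal, hypothetical sketch of that pattern in Python. The Graph calendarView endpoint and the OpenAI chat API are real, but the access token, model name and prompt are placeholder assumptions for illustration, not how Copilot 365 is actually implemented.

```python
# Hypothetical sketch of the "Graph data + LLM" pattern described above.
# Assumes a valid delegated Microsoft Graph token (Calendars.Read) and an
# OpenAI API key; this is NOT Microsoft's internal Copilot implementation.
import datetime
import requests
from openai import OpenAI

GRAPH_TOKEN = "..."   # placeholder: OAuth access token for Microsoft Graph
client = OpenAI()     # reads OPENAI_API_KEY from the environment

def todays_events(token: str) -> list[dict]:
    """Fetch today's calendar events from Microsoft Graph."""
    start = datetime.datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
    end = start + datetime.timedelta(days=1)
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/calendarView",
        params={"startDateTime": start.isoformat(), "endDateTime": end.isoformat()},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

def meeting_prep_bullets(events: list[dict]) -> str:
    """Ask an LLM to turn the raw event list into meeting-prep bullets."""
    summary = "\n".join(
        f"- {e.get('subject', '(no subject)')}: {e.get('bodyPreview', '')[:200]}"
        for e in events
    )
    chat = client.chat.completions.create(
        model="gpt-4",  # assumption: any capable chat model works for this sketch
        messages=[
            {"role": "system", "content": "You prepare concise meeting-prep bullets."},
            {"role": "user", "content": f"Today's meetings:\n{summary}\n\nGive me 3 bullets per meeting."},
        ],
    )
    return chat.choices[0].message.content

if __name__ == "__main__":
    print(meeting_prep_bullets(todays_events(GRAPH_TOKEN)))
```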
In addition to being embedded in 365 apps, Copilot is also offered as a sidekick in a new Business Chat experience.
According to a Microsoft blog post, Business Chat "works alongside you, using the power of the Microsoft Graph to bring together data from across your documents, presentations, email, calendar, notes, and contacts. Bring together information from multiple sources to keep everyone on the team on the same page and moving forward together. Spend less time focused on the tools and more time focused on the most important work. Today, our preview customers will be able to access Business Chat in Microsoft Teams."
In a separate announcement, Microsoft also debuted the new Copilot in Power Platform for AI-powered no-code/low-code software development, including:
Copilot in Power Apps: Describe what you want the app to do, and it will generate a data table. You can then refine and improve the app with natural language.
Copilot in Power Automate: Automate flows with natural language in seconds to digitize and speed up business processes.
Copilot in Power Virtual Agents: Use generative AI to build intelligent, conversational chatbots in minutes.
Microsoft Copilot 365 offers starting point for nearly all knowledge work
Copilot now essentially becomes the starting point for all knowledge work in Microsoft.
Forrester AI analyst Rowan Curran weighed in on the announcements. “Embedding generative AI capabilities like text and image generation into everyday office and productivity tools has the potential to significantly change people’s workflows in an enormous swath of job roles,” he told VentureBeat. “Having capabilities to generate a summarization of a white paper into a blog post, and the ability to do it within your core productivity app, reduces the friction around integrating these tools into workflows, because the user doesn’t have to go to a different tool to use them.” However, he pointed out that the models become far more powerful when they are fine-tuned on a company’s specific data.
“When the major productivity-suite providers start enabling this capability is when we may start to see an acceleration of the use of these capabilities, even if they don’t immediately take off in their initial version,” he said.
The impacts might not transform the workplace tomorrow, he explained. "But the wheel has started rolling forward, and over the next several years we can expect to see compounding effects from the use of these embedded generative capabilities."
Microsoft caps an epic generative AI week
The Microsoft Copilot 365 release caps an epic week in generative AI. It began with Google's announcements on Monday of new generative AI capabilities and features for developers, through a PaLM API and in Google Cloud, as well as new integrations for users of Google Workspace, including in Gmail and Google Docs.
Google’s announcements felt far more like a generative AI laundry list than Microsoft’s finely-honed marketing effort. And they came just a month after Google unveiled its search chatbot Bard and less than a week after Bloomberg reported that a new internal Google directive “requires generative AI to be incorporated into all of its biggest products within months.” The hot AI productivity party continues Still, this two-horse Big Tech productivity race shows no signs of slowing down. One bigger question is what will happen to the hot AI productivity app party that’s been dancing its way around Silicon Valley for months now.
For example, can the start-up productivity darlings, from Jasper and Tome to HyperWrite and Writer, compete with Microsoft and Google's offerings? Or will Big Tech take over this dance floor for good? This story is being updated…
"
|
3016 | 2023 |
"Microsoft wants your next salesperson to have an AI copilot | VentureBeat"
|
"https://venturebeat.com/ai/microsoft-wants-your-next-salesperson-to-have-an-ai-copilot"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft wants your next salesperson to have an AI copilot Share on Facebook Share on X Share on LinkedIn View of a Microsoft logo on March 10, 2021, in New York.
Microsoft is continuing to expand its AI efforts, announcing a series of new initiatives at its Microsoft Inspire conference that kicks off today.
Throughout 2023, Microsoft has been pushing the idea that there is a need for AI copilots across the enterprise application landscape. The basic goal of a copilot is to bring the power of generative AI, and in Microsoft’s case the Azure OpenAI service, to applications to help answer questions, provide recommendations, perform sentiment analysis and generate content.
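As a concrete illustration of that goal, here is a minimal, hedged sketch of how an application might call the Azure OpenAI service to summarize a customer note, gauge sentiment and suggest a next step. The endpoint, deployment name and prompt below are assumptions for illustration only, not the actual Sales Copilot implementation.

```python
# Minimal sketch of the copilot pattern: send application data to the
# Azure OpenAI service and get back an answer, recommendation and sentiment.
# The endpoint URL and deployment name are placeholders, not real values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
)

def analyze_customer_note(note: str) -> str:
    """Ask the model for a one-line summary, sentiment and a suggested next step."""
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # assumption: the name of your Azure deployment
        messages=[
            {"role": "system",
             "content": "You are a sales assistant. Return a one-line summary, "
                        "the customer's sentiment, and one recommended next step."},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content

print(analyze_customer_note(
    "Customer liked the demo but is worried about migration costs and timelines."
))
```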
In March, Microsoft rolled out a series of copilots designed specifically for its Dynamics enterprise software suite for customer relationship management (CRM) as well as enterprise resource planning (ERP).
At Microsoft Inspire, the copilot effort for Dynamics is being expanded with the launch of Microsoft Sales Copilot. Additionally, Microsoft is adding new AI features to its Dynamics 365 customer insights platform to further help enterprises with customer engagement and sales.
The overall goal of the Microsoft update is to help employees get their jobs done more efficiently.
“Our customers tell us that their employees are really struggling to keep up with the market demand and meet customer expectations because they’re overwhelmed with the amount of data coming at them,” Emily He, CVP for Microsoft Business Applications and Platform told VentureBeat. “And they’re also frustrated with the number of tools they need to navigate across just to get any work done.” How Microsoft hopes to drive sales Microsoft Sales Copilot is an AI-powered digital assistant for salespeople. The new tool can suggest ideas, generate content, recap meetings, reduce administrative tasks and help sellers close more deals, according to Microsoft.
The Sales Copilot can be accessed from Microsoft 365, Outlook, Teams or Dynamics 365. The copilot can learn and benefit from data that an organization has in a Microsoft-based system, which isn't a surprise. What is somewhat more noteworthy is the fact that Sales Copilot can also be used alongside a CRM that an organization has on a non-Microsoft platform, like Salesforce.
Salesforce launched its own AI-powered assistant, Sales GPT, last month, and Microsoft's effort will now compete against it in some respects.
With Sales Copilot, He explained, before a salesperson goes into a customer meeting, an auto-generated opportunity summary, including status, progress and key changes for the account, can be created. Additionally, while in a customer meeting using Teams, the salesperson can access accounting and CRM information. She added that the Sales Copilot can also provide sentiment analysis to let the salesperson know how the conversation is going, and recommend competitive insights.
Dynamics 365 Customer Insights gets more dynamic
With the Dynamics 365 Customer Insights update, Microsoft is bringing together capabilities that had previously only been available as two separate services.
The Dynamics 365 Marketing offering is now converged into Dynamics 365 Customer Insights. The combined product now includes insights, real-time marketing and customer journey orchestration all in one. Additionally, it now also has embedded copilot capabilities. As such, enterprise users can use natural language for various tasks such as customer segmentation, as well as asking the system to recommend content for campaigns.
“It’s kind of a marketing marketer’s dream coming true with an assistant helping you every step of the way,” said He.
AI is a driver for cloud migration
Alongside the product updates, Microsoft is also launching a new initiative to help move its on-premises CRM users to the cloud. The new program provides incentives to encourage on-premises users to make the move, with a key driver being the ability to use more AI.
“If you’re an on prem customer, the AI copilot capabilities are not going to be embedded in your on prem solution,” said He. “So for them to really leverage AI capabilities, they need to move to the cloud and this is re-invigorating the conversation about moving to the cloud.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
3017 | 2023 |
"OpenAI brings DALL-E 3 image AI to ChatGPT Plus/Enterprise | VentureBeat"
|
"https://venturebeat.com/ai/openai-brings-dall-e-3-image-generator-to-chatgpt-for-enterprise-teases-classifier"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI brings DALL-E 3 image generator to ChatGPT for Enterprise, teases classifier Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As promised back in September, OpenAI has now rolled out access to its newest image-generating AI model, DALL-E 3, to users of its ChatGPT Plus subscription service (starting at $20 monthly) and ChatGPT for Enterprise (variable pricing).
In a blog post announcing the news , OpenAI writes, “compared to its predecessor, DALL-E 3 generates images that are not only more visually striking but also crisper in detail. DALL·E 3 can reliably render intricate details, including text, hands, and faces. Additionally, it is particularly good in responding to extensive, detailed prompts, and it can support both landscape and portrait aspect ratios.” In addition, as previously reported by VentureBeat, DALL-E 3 also offers the ability for users to generate text and typography baked into images, which is especially helpful for marketing, branding, and other business-related visual content such as promotional imagery or sales materials. In that way, it offers capabilities beyond some of the image generating AI competition including Adobe Firefly 2 and Midjourney.
OpenAI provided several examples of what people can use DALL-E 3 in ChatGPT Plus to do, including generate art for school projects and corporate logos.
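For readers who want the same capability programmatically rather than through the ChatGPT interface, the sketch below shows what a DALL-E 3 request looks like through OpenAI's Python SDK. Availability on a given API account, and the example prompt, are assumptions here rather than something the article confirms.

```python
# Hedged sketch: generating a DALL-E 3 image via the OpenAI API rather than
# the ChatGPT Plus interface described above. Assumes the openai Python SDK
# (>= 1.0) and that DALL-E 3 is enabled for your API account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A flat corporate logo for a fictional coffee startup called 'Beanline', "
           "minimalist, with the company name rendered legibly in the image",
    size="1792x1024",   # landscape; 1024x1792 gives portrait, 1024x1024 square
    quality="standard",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```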
In VentureBeat's own tests of the DALL-E 3 ChatGPT Plus integration, which we've had for several days, one valuable feature it offers over other image-generation services is the ability to have a conversation with the AI, asking it to alter images and move elements around or change them, without generating a wholly different image or requiring the human user to go in and manually edit sections. See the screenshot below for an example of how we used this feature to create this article's art:
AI image classifier for fighting disinformation and propaganda
But that's not all: OpenAI also today released a research paper on how it developed DALL-E 3 and said it was working on an image classifier that can reliably tell with 95-99% accuracy whether an image was generated by DALL-E 3, a valuable tool for fighting AI-produced disinformation and propaganda, which has been on the rise in the last several days amid the Israel-Hamas conflict.
As OpenAI writes in their blog post: "We're researching and evaluating an initial version of a provenance classifier—a new internal tool that can help us identify whether or not an image was generated by DALL·E 3. In early internal evaluations, it is over 99% accurate at identifying whether an image was generated by DALL·E when the image has not been modified. It remains over 95% accurate when the image has been subject to common types of modifications, such as cropping, resizing, JPEG compression, or when text or cutouts from real images are superimposed onto small portions of the generated image. Despite these strong results on internal testing, the classifier can only tell us that an image was likely generated by DALL·E, and does not yet enable us to make definitive conclusions. This provenance classifier may become part of a range of techniques to help people understand if audio or visual content is AI-generated." The classifier is an especially interesting move, clearly an attempt by OpenAI to show its sense of responsibility for the products it creates and some of their more negative or harmful effects on society. But it follows OpenAI's release and withdrawal of a classifier for AI-generated text, which OpenAI (and many researchers and users) concluded was ultimately not accurate enough and inaccurately labeled human-written text as AI-generated, especially text by English-as-a-second-language writers.
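OpenAI has not released this classifier, so the sketch below is purely illustrative: it shows how one could measure the kind of robustness OpenAI describes (accuracy after cropping, resizing and JPEG compression) against any provenance classifier, with `is_ai_generated` standing in as a hypothetical stub.

```python
# Illustrative only: evaluating a provenance classifier's robustness to the
# modifications OpenAI mentions (crop, resize, JPEG compression).
# `is_ai_generated` is a hypothetical stub -- OpenAI's classifier is internal.
import io
from pathlib import Path
from PIL import Image

def is_ai_generated(img: Image.Image) -> bool:
    """Placeholder for a provenance classifier; always returns True here."""
    return True

def perturb(img: Image.Image, kind: str) -> Image.Image:
    """Apply one of the common modifications described in the blog post."""
    if kind == "crop":
        w, h = img.size
        return img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
    if kind == "resize":
        return img.resize((img.width // 2, img.height // 2))
    if kind == "jpeg":
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=60)
        buf.seek(0)
        return Image.open(buf)
    return img  # "none": leave the image unmodified

def accuracy(image_dir: str, kind: str) -> float:
    """Fraction of known-generated images still flagged after a modification."""
    paths = list(Path(image_dir).glob("*.png"))
    hits = sum(is_ai_generated(perturb(Image.open(p), kind)) for p in paths)
    return hits / len(paths) if paths else 0.0

for kind in ("none", "crop", "resize", "jpeg"):
    print(kind, accuracy("generated_images/", kind))
```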
There’s no word yet from the company on when the new AI image classifier will be released.
"
|
3018 | 2022 |
"Meta's new Make-a-Video signals the next generative AI evolution | VentureBeat"
|
"https://venturebeat.com/ai/metas-new-make-a-video-signals-the-next-generative-ai-evolution"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta’s new Make-a-Video signals the next generative AI evolution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This morning Meta CEO Mark Zuckerberg posted on his Facebook page to announce Make-A-Video, a new AI system that allows users to turn text prompts, like “a teddy bear painting a self-portrait,” into short, high-quality, one-of-a-kind video clips.
Sound like DALL-E ? That’s the idea: According to a press release, Make-A-Video builds on AI image generation technology (including Meta’s Make-A-Scene work from earlier this year) by “adding a layer of unsupervised learning that allows the system to understand motion in the physical world and apply it to traditional text-to-image generation.” “This is pretty amazing progress,” Zuckerberg wrote in his post. “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.” A year after DALL-E It’s hard to believe that it has been only about a year since the original DALL-E was unveiled January 2021, while 2022 has seemed to be the year of the text-to-image revolution thanks to DALL-E 2 , Midjourney , Stable Diffusion and other large generative models allowing users to create realistic images and art from natural text prompts.
Is Meta's new Make-A-Video a sign that the next step of generative AI, text-to-video, is about to go mainstream? Given the sheer speed of text-to-image evolution this year — Midjourney even created controversy with an image that won an art competition at the Colorado State Fair — it certainly seems possible. A couple of weeks ago, video editing software company Runway released a promotional video teasing a new feature of its AI-powered, web-based video editor that can edit video from written descriptions.
And the demand for text-to-video generators at the level of today’s text-to-image options is high, thanks to the need for video content across all channels — from social media advertising and video blogs to explainer videos.
Meta, for its part, seems confident, according to its research paper introducing Make-A-Video: "In all aspects, spatial and temporal resolution, faithfulness to text, and quality, we present state-of-the-art results in text-to-video generation, as determined by both qualitative and quantitative measures."
"
|
3019 | 2023 |
"Global leaders scramble to regulate the future of AI | VentureBeat"
|
"https://venturebeat.com/ai/global-leaders-scramble-to-regulate-the-future-of-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Global leaders scramble to regulate the future of AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There is no doubt that the pace of AI development has accelerated over the last year. Due to rapid advances in technology, the idea that AI could one day be smarter than people has moved from science fiction to plausible near-term reality.
Geoffrey Hinton, a Turing Award winner, concluded in May that the time when AI could be smarter than people was not 50 to 60 years away, as he had initially thought, but possibly by 2028. Additionally, DeepMind co-founder Shane Legg said recently that he thinks there is a 50-50 chance of achieving artificial general intelligence (AGI) by 2028. (AGI refers to the point when AI systems possess general cognitive abilities and can perform intellectual tasks at the level of humans or beyond, rather than being narrowly focused on accomplishing specific functions, as has been the case so far.) This near-term possibility has prompted robust — and at times heated — debates about AI, specifically the ethical implications and regulatory future.
These debates have moved from academic circles to the forefront of global policy, prompting governments, industry leaders and concerned citizens to grapple with questions that may shape the future of humanity.
These debates have taken a large step forward with several significant regulatory announcements, although considerable ambiguity remains.
The debate over AI's existential risks
There is hardly universal agreement on any predictions about AI, other than the likelihood that there could be great changes ahead. Nevertheless, the debates have prompted speculation about how — and the extent to which — AI developments might go awry.
For example, OpenAI CEO Sam Altman expressed his views bluntly during a Congressional hearing in May about the dangers that AI might cause. “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.” Altman was not alone in this view. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read a single-sentence statement released in late May by the nonprofit Center for AI Safety. It was signed by hundreds of people, including Altman and 38 members of Google’s DeepMind AI unit. This point of view was expressed at the peak of AI doomerism, when concerns about possible existential risks were most rampant.
It is certainly reasonable to speculate on these issues as we move closer to 2028, and to ask how prepared we are for the potential risks. However, not everyone believes the risks are that high, at least not the more extreme existential risks that are motivating so much of the conversation about regulation.
Industry voices of skepticism and concern
Andrew Ng, the former head of Google Brain, is one who takes exception to the doomsday scenarios.
He said recently that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.
In Ng’s view , this is a way for big tech to create regulatory capture to ensure that open source alternatives cannot compete.
Regulatory capture is a concept where a regulatory agency enacts policies that favor the industry at the expense of the broader public interest, in this case with regulations that are too onerous or expensive for smaller businesses to meet.
Meta’s chief AI scientist Yann LeCun — who, like Hinton is a winner of the Turing Award –– went a step further last weekend.
Posting on X, formerly known as Twitter, he claimed that Altman, Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis are all engaging in "massive corporate lobbying" by promoting doomsday AI scenarios that are "preposterous." The net effect of this lobbying, he contended, would be regulations that effectively limit open-source AI projects due to the high costs of meeting regulations, effectively leaving only "a small number of companies [that] will control AI."
The regulatory push
Nevertheless, the march to regulation has been speeding up. In July, the White House announced a voluntary commitment from OpenAI and other leading AI developers — including Anthropic, Alphabet, Meta and Microsoft — who pledged to create ways to test their tools for security before public release. Additional companies joined this commitment in September, bringing the total to 15 firms.
U.S. government stance
The White House this week issued a sweeping Executive Order on "Safe, Secure, and Trustworthy Artificial Intelligence," aiming for a balanced approach between unfettered development and stringent oversight.
According to Wired , the order is designed to both promote broader use of AI and keep commercial AI on a tighter leash, with dozens of directives for federal agencies to complete within the next year. These directives cover a range of topics, from national security and immigration to housing and healthcare, and impose new requirements for AI companies to share safety test results with the federal government.
Kevin Roose, a technology reporter for the New York Times, noted that the order seems to have a little bit for everyone , encapsulating the White House’s attempt to walk a middle path in AI governance. Consulting firm EY has provided an extensive analysis.
While not having the permanence of legislation — the next president can simply reverse it, if they like — this is a strategic ploy to put the U.S. view at the center of the high-stakes global race to influence the future of AI governance. According to President Biden, the Executive Order “is the most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” Ryan Heath at Axios commented that the “approach is more carrot than stick, but it could be enough to move the U.S. ahead of overseas rivals in the race to regulate AI.” Writing in his Platformer newsletter, Casey Newton applauded the administration.
They have “developed enough expertise at the federal level [to] write a wide-ranging but nuanced executive order that should mitigate at least some harms while still leaving room for exploration and entrepreneurship.” The ‘World Cup’ of AI policy It is not only the U.S. taking steps to shape the future of AI. The Center for AI and Digital Policy said recently that last week was the “World Cup” of AI policy. Besides the U.S., the G7 also announced a set of 11 non-binding AI principles, calling on “organizations developing advanced AI systems to commit to the application of the International Code of Conduct.
” Like the U.S. order, the G7 code is designed to foster “safe, secure, and trustworthy AI systems.” As noted by VentureBeat, however, “different jurisdictions may take their own unique approaches to implementing these guiding principles.” In the grand finale last week, The U.K. AI Safety Summit brought together governments, research experts, civil society groups and leading AI companies from around the world to discuss the risks of AI and how they can be mitigated. The Summit particularly focused on “frontier AI” models, the most advanced large language models (LLM) with capabilities that come close to or exceed human-level performance in multiple tasks, including those developed by Alphabet, Anthropic, OpenAI and several other companies.
As reported by The New York Times, an outcome from this conclave is "The Bletchley Declaration," signed by representatives from 28 countries, including the U.S. and China, which warned of the dangers posed by the most advanced frontier AI systems.
Positioned by the UK government as a “world-first agreement” on managing what they see as the riskiest forms of AI, the declaration adds: “We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI.” However, the agreement did not set any specific policy goals. Nevertheless, David Meyer at Fortune assessed this as a “promising start” for international cooperation on a subject that only emerged as a serious issue in the last year.
Balancing innovation and regulation
As we approach the horizon outlined by experts like Geoffrey Hinton and Shane Legg, it is evident that the stakes in AI development are rising. From the White House to the G7, the EU, United Nations, China and the UK, regulatory frameworks have emerged as a top priority. These early efforts aim to mitigate risks while fostering innovation, although questions around their effectiveness and impartiality in actual implementation remain.
What is abundantly clear is that AI is an issue of global import. The next few years will be crucial in navigating the complexities of this duality: Balancing the promise of life-altering positive innovations such as more effective medical treatments and combating climate change against the imperative for ethical and societal safeguards. Along with governments, business and academia, grassroots activism and citizen involvement are increasingly becoming vital forces in shaping AI’s future.
It’s a collective challenge that will shape not just the technology industry but potentially the future course of humanity.
"
|
3020 | 2023 |
"Adobe publicly launches AI tools Firefly, Generative Fill in Creative Cloud overhaul | VentureBeat"
|
"https://venturebeat.com/ai/adobe-publicly-launches-ai-tools-firefly-generative-fill-in-creative-cloud-overhaul"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Adobe publicly launches AI tools Firefly, Generative Fill in Creative Cloud overhaul Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Adobe , the software giant behind Photoshop, Illustrator, Premiere Pro and other popular creative tools, announced today it is charting a radical new course in creative software, integrating artificial intelligence (AI) across its Creative Cloud applications, a sign of the company’s faith in its liability protections for enterprises.
Central to the update is the official integration of Adobe Firefly , the company’s new AI engine, directly into Creative Cloud software. Firefly uses generative AI to allow users to create or modify images, graphics, and other media through simple text prompts. For example, a Photoshop user can now add or remove objects from an image by describing the changes in words.
The release marks the transition of Firefly and several other AI features such as Generative Fill from beta testing into general availability, indicating Adobe’s confidence in both the technology and its ability to protect enterprise customers from legal liability. Adobe previously told VentureBeat that Firefly is the only “ commercially safe ” generative AI tool available on the market.
In addition to new AI integrations, Adobe has also launched Firefly and Adobe Express Premium as standalone apps included with certain Creative Cloud plans. Express Premium provides easy social media and marketing content creation leveraging Firefly’s AI, while the Firefly web app serves as a sandbox for experimenting with AI-generated art, designs and more.
The company also announced a new credit-based model across all Creative Cloud subscription plans to enable broad access to and adoption of generative AI workflows. Starting today, Creative Cloud, Firefly and Express paid plans include a monthly allocation of "fast" Generative Credits. These are like tokens that enable subscribers to turn a text-based prompt into image and vector content in Photoshop, Illustrator, Express and Firefly.
Adobe enters a new creative age with AI technology
The announcement marks a significant milestone in the evolution of Adobe's Creative Cloud, which has been the dominant platform for digital art and media for decades. Photoshop, Illustrator and other creative tools have been used by millions of professionals and amateurs alike to create, edit and share images, graphics, videos and more. They have also been the source of countless memes, parodies, remixes and viral content that have shaped online communities and trends.
But as AI technology advances and becomes more accessible, Adobe faces new challenges and opportunities in the creative landscape. On one hand, AI poses a threat to the originality and authenticity of creative work and may create new forms of plagiarism, fraud and deception. On the other hand, AI offers a new way of enhancing creativity and expression that could transform the way people communicate and consume digital media.
Adobe seems to be aware of these challenges and opportunities and has taken steps to address them. The company has clearly stated in its terms of use that users are solely responsible for their use of generative AI content and must comply with applicable laws and regulations. Users must also respect the intellectual property rights of others and obtain any necessary permissions before using generative AI content for commercial purposes.
Defending copyrighted works, creators, and artists Adobe’s announcement of the new AI-powered Creative Cloud features and pricing update coincides with a blog post published the same day by the company’s vice president of legal and government relations, Dana Rao.
In the post , Rao proposes that Congress establish a new Federal Anti-Impersonation Right (the “FAIR” Act) to protect artists from the potential economic harm caused by the intentional and commercial impersonation of their work or likeness through AI tools.
Rao argues that such a law would provide a right of action to an artist against those who misuse AI tools to compete directly against them in the marketplace using their style or identity. He also says that Adobe has trained its generative AI model Firefly only on licensed, public domain, moderated or openly licensed content to minimize the risk of style impersonation.
The timing of the blog post suggests that Adobe is confident that its new generative AI features will not infringe the rights of artists or expose them to liability issues.
Adobe’s announcements today raise several important questions about the future of AI-assisted art. How will generative AI change the way we create and consume digital media? How will it affect our notions of originality, authenticity and authorship? How will it challenge our legal and ethical frameworks? And how will it impact our culture and society? These are the questions that we will have to grapple with as we enter a new era of creativity assisted by artificial intelligence.
"
|
3,021 | 2,023 |
"Machine unlearning: The critical art of teaching AI to forget | VentureBeat"
|
"https://venturebeat.com/ai/machine-unlearning-the-critical-art-of-teaching-ai-to-forget"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Machine unlearning: The critical art of teaching AI to forget Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Have you ever tried to intentionally forget something you had already learned? You can imagine how difficult it would be.
As it turns out, it’s also difficult for machine learning (ML) models to forget information. So what happens when these algorithms are trained on outdated, incorrect or private data? Retraining the model from scratch every time an issue arises with the original dataset is hugely impractical. This has led to the requirement of a new field in AI called machine unlearning.
With new lawsuits being filed what seems like every other day, the need for ML systems to efficiently ‘forget’ information is becoming paramount for businesses. Algorithms have proven to be incredibly useful in many areas, but the inability to forget information has significant implications for privacy, security and ethics.
Let’s take a closer look at the nascent field of machine unlearning — the art of teaching artificial intelligence (AI) systems to forget.
Understanding machine unlearning So as you might have gathered by now, machine unlearning is the process of erasing the influence specific datasets have had on an ML system.
Most often, when a concern arises with a dataset, it’s a case of modifying or simply deleting the dataset. But in cases where the data has been used to train a model, things can get tricky. ML models are essentially black boxes. This means that it’s difficult to understand exactly how specific datasets impacted the model during training and even more difficult to undo the effects of a problematic dataset.
OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data.
Privacy concerns have also been raised after membership inference attacks have shown that it’s possible to infer whether specific data was used to train a model. This means that the models can potentially reveal information about the individuals whose data was used to train it.
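To make that risk concrete, here is a minimal, purely illustrative sketch (not from the article) of a loss-threshold membership inference test: a model tends to fit its own training examples unusually well, so very low per-sample loss hints that a record was part of the training set. The dataset, model and threshold below are all hypothetical placeholders.

```python
# Illustrative loss-threshold membership inference sketch (hypothetical setup).
# Records the model fits with unusually low loss are guessed to be "members".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_sample_loss(model, X, y):
    # Cross-entropy of the true class for each individual example.
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

train_loss = per_sample_loss(model, X_train, y_train)
out_loss = per_sample_loss(model, X_out, y_out)

# Guess "member" whenever the loss falls below a calibrated threshold.
threshold = np.median(np.concatenate([train_loss, out_loss]))
print(f"Flagged as members: {(train_loss < threshold).mean():.1%} of training rows, "
      f"{(out_loss < threshold).mean():.1%} of held-out rows")
```

If the two percentages differ noticeably, an observer can infer something about which records were used in training — exactly the kind of leakage that motivates unlearning.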
While machine unlearning might not keep companies out of court, it would certainly help the defense’s case to show that datasets of concern have been removed entirely.
With the current technology, if a user requests data deletion, the entire model would need to be retrained, which is hugely impractical. The need for an efficient way to handle data removal requests is imperative for the progression of widely accessible AI tools.
The mechanics of machine unlearning The simplest solution to produce an unlearned model is to identify problematic datasets, exclude them and retrain the entire model from scratch. While this method is currently the simplest, it is prohibitively expensive and time-consuming.
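As a rough sketch of what that brute-force route looks like in practice (an illustration on a toy dataset, not a recommendation), exact unlearning simply drops the offending rows and fits a fresh model:

```python
# Brute-force "exact unlearning" sketch: drop the problematic rows and retrain.
# Provably forgets the data, but every deletion request costs a full retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=30, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

def unlearn_by_retraining(X, y, forget_idx):
    """Return a new model trained without the rows listed in forget_idx."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# A hypothetical deletion request covering 100 specific records.
clean_model = unlearn_by_retraining(X, y, forget_idx=np.arange(100))
```

On a toy dataset this is instant; on a production-scale model the same pattern translates into enormous compute bills, which is the cost problem discussed next.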
Recent estimates indicate that training an ML model currently costs around $4 million. Due to an increase in both dataset size and computational power requirements, this number is predicted to rise to a whopping $500 million by 2030.
The “brute force” retraining approach might be appropriate as a last resort under extreme circumstances, but it’s far from a silver bullet solution.
The conflicting objectives of machine unlearning present a challenging problem. Specifically, forgetting bad data while retaining utility, which must be done at high efficiency. There’s no point in developing a machine unlearning algorithm that uses more energy than retraining would.
Progression of machine unlearning All this isn’t to say there hasn’t been progress toward developing an effective unlearning algorithm. The first mention of machine unlearning was seen in this paper from 2015 , with a follow-up paper in 2016. The authors propose a system that allows incremental updates to an ML system without expensive retraining.
A 2019 paper furthers machine unlearning research by introducing a framework that expedites the unlearning process by strategically limiting the influence of data points in the training procedure. This means specific data can be removed from the model with minimal negative impact on performance.
This 2019 paper also outlines a method to “scrub” network weights clean of information about a particular set of training data without access to the original training dataset. This prevents an observer from recovering insights about the forgotten data by probing the weights.
This 2020 paper introduced the novel approach of sharding and slicing optimizations. Sharding aims to limit the influence of a data point, while slicing divides the shard’s data further and trains incremental models. This approach aims to expedite the unlearning process and eliminate extensive retraining.
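A minimal sketch of that sharding idea (heavily simplified — slicing and the other details of the actual framework are omitted, and every name and dataset below is illustrative): each shard trains its own constituent model, predictions are aggregated by vote, and deleting a record only forces a retrain of the one shard that held it.

```python
# Simplified sharded-training sketch of the idea above (illustrative only).
# Each shard owns a model; deleting a record retrains just that shard.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

N_SHARDS = 5
X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
shards = np.array_split(np.random.RandomState(1).permutation(len(X)), N_SHARDS)

def train_shard(idx):
    return DecisionTreeClassifier(random_state=1).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shards]

def predict(X_new):
    votes = np.stack([m.predict(X_new) for m in models])  # (n_shards, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)        # majority vote, binary labels

def forget(record_id):
    """Drop one record and retrain only the shard that contained it."""
    for s, idx in enumerate(shards):
        if record_id in idx:
            shards[s] = idx[idx != record_id]
            models[s] = train_shard(shards[s])
            return s

print(f"Only shard {forget(record_id=42)} of {N_SHARDS} was retrained")
```

The trade-off is typical of the field: deletions become cheap, but each constituent model sees less data, which can cost some accuracy.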
A 2021 study introduces a new algorithm that can unlearn more data samples from the model compared to existing methods while maintaining the model’s accuracy.
Later in 2021 , researchers developed a strategy for handling data deletion in models, even when deletions are based only on the model’s output.
Since the term was introduced in 2015, various studies have proposed increasingly efficient and effective unlearning methods. Despite significant strides, a complete solution is yet to be found.
Challenges of machine unlearning Like any emerging area of technology, we generally have a good idea of where we want to go, but not a great idea of how to get there. Some of the challenges and limitations machine unlearning algorithms face include: Efficiency : Any successful machine unlearning tool must use fewer resources than retraining the model would. This applies to both computational resources and time spent.
Standardization : Currently, the methodology used to evaluate the effectiveness of machine unlearning algorithms varies between each piece of research. To make better comparisons, standard metrics need to be identified.
Efficacy: Once an ML algorithm has been instructed to forget a dataset, how can we be confident it has really forgotten it? Solid validation mechanisms are needed.
Privacy: Machine unlearning must ensure that it doesn’t inadvertently compromise sensitive data in its efforts to forget. Care must be taken to ensure that traces of data are not left behind in the unlearning process.
Compatibility: Machine unlearning algorithms should ideally be compatible with existing ML models. This means that they should be designed in a way that they can be easily implemented into various systems.
Scalability: As datasets become larger and models more complex, it’s important that machine unlearning algorithms are able to scale to match. They need to handle large amounts of data and potentially perform unlearning tasks across multiple systems or networks.
Addressing all these issues poses a significant challenge and a healthy balance must be found to ensure a steady progression. To help navigate these challenges, companies can employ interdisciplinary teams of AI experts, data privacy lawyers and ethicists. These teams can help identify potential risks and keep track of progress made in the machine unlearning field.
The future of machine unlearning Google recently announced the first machine unlearning challenge. This aims to address the issues outlined so far. Specifically, Google hopes to unify and standardize the evaluation metrics for unlearning algorithms, as well as foster novel solutions to the problem.
The competition, which considers an age predictor tool that must forget certain training data to protect the privacy of specified individuals, began in July and runs through mid-September 2023. For business owners who might have concerns about data used in their models, the results of this competition are most certainly worth paying attention to.
In addition to Google’s efforts, the continuous build-up of lawsuits against AI and ML companies will undoubtedly spark action within these organizations.
Looking further ahead, we can anticipate advancements in hardware and infrastructure to support the computational demands of machine unlearning. There may be an increase in interdisciplinary collaboration that can assist in streamlining development. Legal professionals, ethicists and data privacy experts may join forces with AI researchers to align the development of unlearning algorithms.
We should also expect that machine unlearning will attract attention from lawmakers and regulators, potentially leading to new policies and regulations. And as issues of data privacy continue to make headlines, increased public awareness could also influence the development and application of machine unlearning in unforeseen ways.
Actionable insights for businesses Understanding the value of machine unlearning is crucial for businesses that are looking to implement or have already implemented AI models trained on large datasets. Some actionable insights include: Monitoring research: Keeping an eye on recent academic and industry research will help you stay ahead of the curve. Pay particular attention to the results of events like Google’s machine unlearning challenge. Consider subscribing to AI research newsletters and following AI thought leaders for up-to-date insights.
Implementing data handling rules: It’s crucial to examine your current and historical data handling practices. Always try to avoid using questionable or sensitive data during the model training phase. Establish procedures or review processes for the proper handling of data.
Consider interdisciplinary teams: The multifaceted nature of machine unlearning benefits from a diverse team that could include AI experts, data privacy lawyers and ethicists. This team can help ensure your practices align with ethical and legal standards.
Consider retraining costs: It never hurts to prepare for the worst. Consider the costs for retraining in the case that machine unlearning is unable to solve any issues that may arise.
Keeping pace with machine unlearning is a smart long-term strategy for any business using large datasets to train AI models. By implementing some or all of the strategies outlined above, businesses can proactively manage any issues that may arise due to the data used in the training of large AI models.
Final thoughts AI and ML are dynamic and continuously evolving fields. Machine unlearning has emerged as a crucial aspect of these fields, allowing them to adapt and evolve more responsibly. It ensures better data handling capabilities while maintaining the quality of the models.
The ideal scenario is to use the right data from the start, but the reality is that our perspectives, information and privacy needs change over time. Adopting and implementing machine unlearning is no longer optional but a necessity for businesses.
In the broader context, machine unlearning fits into the philosophy of responsible AI. It underscores the need for systems that are transparent and accountable and that prioritize user privacy.
It’s still early days, but as the field progresses and evaluation metrics become standardized, implementing machine unlearning will inevitably become more manageable. This emerging trend warrants a proactive approach from businesses that regularly work with ML models and large datasets.
Matthew Duffin is a mechanical engineer, dedicated blogger and founder of Rare Connections.
"
|
3,022 | 2,023 |
"IBM finds that ChatGPT can generate phishing emails nearly as convincing as a human | VentureBeat"
|
"https://venturebeat.com/ai/ibm-x-force-pits-chatgpt-against-humans-whos-better-at-phishing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM finds that ChatGPT can generate phishing emails nearly as convincing as a human Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As it continues to evolve at a near-unimaginable pace, AI is becoming capable of many extraordinary things — from generating stunning art and 3D worlds to serving as an efficient, reliable workplace partner.
But are generative AI and large language models (LLMs) as deceitful as human beings? Almost. At least for now, we maintain our supremacy in that area, according to research out today from IBM X-Force.
In a phishing experiment conducted to determine whether AI or humans would garner a higher click-through rate, ChatGPT built a convincing email in minutes from just five simple prompts that proved nearly — but not quite — as enticing as a human-generated one.
“As AI continues to evolve, we’ll continue to see it mimic human behavior more accurately, which may lead to even closer results, or AI ultimately beating humans one day,” Stephanie (Snow) Carruthers, IBM’s chief people hacker, told VentureBeat.
Five minutes versus 16 hours After systematic experimentation, the X-Force team developed five prompts to instruct ChatGPT to generate phishing emails targeted to employees in healthcare. The final email was then sent to 800 workers at a global healthcare company.
The model was first asked to identify top areas of concern for industry employees; it pointed to career advancement, job stability and fulfilling work, among others.
Then, when queried about what social engineering and marketing techniques should be used, ChatGPT reported back trust, authority and social proof; and personalization, mobile optimization and call to action, respectively. The model then advised that the email should come from the internal human resources manager.
Finally, ChatGPT generated a convincing phishing email in just five minutes. By contrast, Carruthers said it takes her team about 16 hours.
“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” said Carruthers.
“Before starting this research project, if you would have asked me who I thought would be the winner, I’d say humans, hands down, no question. However, after spending time creating those prompts and seeing the AI-generated phish, I was very worried about who would win.” The human team’s ‘meticulous’ process After ChatGPT produced its email, Carruthers’ team got to work, beginning with open-source intelligence (OSINT) acquisition — that is, retrieving publicly accessible information from sites such as LinkedIn, the organization’s blog and Glassdoor reviews.
Notably, they uncovered a blog post detailing the recent launch of an employee wellness program and its manager within the organization.
In contrast to ChatGPT’s quick output, they then began “meticulously constructing” their phishing email, which included an employee survey of “five brief questions” that would only take “a few minutes” and needed to be returned by “this Friday.” The final email was then sent to 800 employees at a global healthcare company.
Humans win (for now) In the end, the human phishing email proved more successful — but just barely. The click-through rate for the human-generated email was 14% compared to the AI’s 11%.
Carruthers identified emotional intelligence, personalization and short and succinct subject lines as the reasons for the human win. For starters, the human team was able to emotionally connect with employees by focusing on a legitimate example within their company, while the AI chose a more generalized topic. Secondly, the recipient’s name was included.
Finally, the human-generated subject line was to the point (“Employee Wellness Survey”) while the AI’s was more lengthy, (“Unlock Your Future: Limited Advancements at Company X”), likely arousing suspicion from the start.
This also led to a higher reporting rate for the AI email (59%), compared to the human phishing report rate of 51%.
Pointing to the subject lines, Carruthers said organizations should educate employees to look beyond traditional red flags.
“We need to abandon the stereotype that all phishing emails have bad grammar,” she said. “That’s simply not the case anymore.” It’s a myth that phishing emails are riddled with bad grammar and spelling errors, she contended — in fact, AI-driven phishing attempts often demonstrate grammatical correctness, she pointed out. Employees should be trained to be vigilant about the warning signs of length and complexity.
“By bringing this information to employees, organizations can help protect them from falling victim,” she said.
Why is phishing still so prevalent? Human-generated or not, phishing remains a top tactic among attackers because, simply put, it works.
“Innovation tends to run a few steps behind social engineering,” said Carruthers. “This is most likely because the same old tricks continue to work year after year, and we see phishing take the lead as the top entry point for threat actors.” The tactic remains so successful because it exploits human weaknesses, persuading us to click a link or provide sensitive information or data, she said. For example, attackers take advantage of a human need and desire to help others or create a false sense of urgency to make a victim feel compelled to take quick action.
Furthermore, the research revealed that gen AI offers productivity gains by speeding up hackers’ ability to create convincing phishing emails. With that time saved, they could turn to other malicious purposes.
Organizations should be proactive by revamping their social engineering programs — to include the simple-to-execute vishing, or voice call/voicemail phishing — strengthening identity and access management (IAM) tools, and regularly updating TTPs, threat detection systems and employee training materials.
“As a community, we need to test and investigate how attackers can capitalize on generative AI,” said Carruthers. “By understanding how attackers can leverage this new technology, we can help [organizations] better prepare for and defend against these evolving threats.”
"
|
3,023 | 2,023 |
"FTC hosts challenge to stop harms of voice cloning AI | VentureBeat"
|
"https://venturebeat.com/ai/ftc-hosts-challenge-to-stop-harms-of-voice-cloning-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FTC hosts challenge to stop harms of voice cloning AI Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI ChatGPT DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Voice cloning — the practice of mimicking someone’s voice so well it can pass for the real thing — has had a banner year, with a range of AI startups and techniques emerging to enable it, and a song going viral featuring voice clones of popular music artists Drake and The Weeknd.
But for the Federal Trade Commission (FTC), the U.S. federal government agency in charge of investigating and preventing consumer harm and promoting fair market competition, voice cloning poses a major risk for consumer fraud. Imagine someone impersonating your mother’s voice and asking you to quickly wire her $5,000, for example. Or even someone stealing and using your voice to access your bank accounts through a customer help hotline.
The FTC is seeking to move quickly (at least, for a government agency) to try and address such scenarios. According to a tentative agenda posted by the agency ahead of its upcoming meeting this Thursday, November 16, the FTC will “announce an exploratory Voice Cloning Challenge to encourage the development of multidisciplinary solutions—from products to procedures—aimed at protecting consumers from artificial intelligence-enabled voice cloning harms, such as fraud and the broader misuse of biometric data and creative content. “ In other words: the FTC wants technologists and members of the public to come up with ways to stop voice clones from tricking people.
The tech is advancing rapidly, worth big money In one demonstration of voice cloning’s propaganda potential, a filmmaker shocked many by generating a realistic-looking deepfake video depicting First Lady Jill Biden criticizing U.S. policy towards Palestine. While intended as satire to bring attention to humanitarian concerns, it showed how AI could craft a seemingly plausible fake narrative using a synthesized clone of the First Lady’s voice.
The producer was able to craft the deepfake in just one week using UK-based ElevenLabs, one of the top voice cloning startups at the forefront of this emerging sector, founded by former employees of controversial military and corporate intelligence AI startup Palantir. ElevenLabs has gained increasing investor interest, reportedly in talks to raise $1 billion in a third funding round this year according to sources that spoke to Business Insider.
This fast-tracked growth signals voice cloning’s rising commercial prospects, and, as with AI more generally, open-source solutions are also available.
However, faster advancement also means more opportunities for harmful misuse may arise before safeguards can catch up. Regulators aim to get ahead of issues through proactive efforts like the FTC’s new challenge program.
Voluntary standards may not be enough At the core of concerns is voice cloning’s ability to generate seemingly authentic speech from only a few minutes of sample audio. This raises possibilities for the creation and spread of fake audios and videos meant to deliberately deceive or manipulate listeners. Experts warn of risks for fraud, deepfakes used to publicly embarrass or falsely implicate targets, and synthetic propaganda affecting political processes.
Mitigation has so far relied on voluntary practices by companies and advocacy for standards. But self-regulation may not be enough. Challenges like the FTC’s offer a coordinated, cross-disciplinary avenue to systematically address vulnerabilities. Through competitively awarded grants, the challenge seeks stakeholder collaboration to develop technical, legal and policy solutions supporting accountability and consumer protection.
Ideas could range from improving deepfake detection methods to establishing provenance and disclosure standards for synthetic media. The resulting mitigations would guide continued safe innovation rather than stifle progress. With Washington and private partners working in tandem, comprehensive solutions that balance rights and responsibilities can emerge.
FTC moves to address Gen AI harms head on According to comments filed to the US Copyright Office , the FTC raised cautions about the potential risks of generative AI being used improperly or deceiving consumers.
By expressing wariness over AI systems being trained on “pirated content without consent,” the filing aligned with debates around whether voice cloning tools adequately obtain permission when using individuals’ speech samples. The Voice Cloning Challenge could support the development of best practices for responsibly collecting and handling personal data.
The FTC also warned of consumer deception risks if AI impersonates people. Through the challenge, the FTC aims to foster the creation of techniques to accurately attribute synthetic speech and avoid misleading deepfakes.
By launching the challenge, the FTC appears to seek to proactively guide voice cloning and other generative technologies toward solutions that can mitigate the consumer and competition concerns raised in its copyright filing.
"
|
3,024 | 2,023 |
"Exploring the role of labeled data in machine learning | VentureBeat"
|
"https://venturebeat.com/ai/exploring-the-role-of-labeled-data-in-machine-learning"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Exploring the role of labeled data in machine learning Share on Facebook Share on X Share on LinkedIn Duffin/MidJourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
If there’s one thing that has fueled the rapid progress of AI and machine learning (ML), it’s data.
Without high-quality labeled datasets, modern supervised learning systems simply wouldn’t be able to perform.
But using the right data for your model isn’t as simple as gathering random information and pressing “run.” There are several underlying factors that can significantly impact the quality and accuracy of an ML model.
If not done right, the labor intensive task of data labeling can result in bias and poor performance. The use of augmented or synthetic data may amplify existing biases or distort reality, and automated labeling techniques might increase the need for quality assurance.
Let’s explore the importance of quality labeled data in training AI models to perform tasks effectively, as well as some of the key challenges, potential solutions and actionable insights.
What is labeled data? Labeled data is a fundamental requirement for training any supervised ML model. Supervised learning models use labeled data to learn and infer patterns, which they can then apply to real-world unlabeled information.
Some examples of the utility of labeled data include: Image data: A basic computer vision model built for detecting common items around the house would need images tagged with classifications like “cup,” “dog,” “flower.” Audio data: Natural language processing (NLP) systems use transcripts paired with audio to learn speech-to-text capabilities.
Text data: A sentiment analysis model might be built with labeled text data including sets of customer reviews each tagged as positive, negative or neutral.
Sensor data: A model built to predict machinery failures could be trained on sensor data paired with labels like “high vibration” or “over temperature.” Depending on the use case, models can be trained on one or multiple data types. For example, a real-time sentiment analysis model might be trained on text data for sentiment and audio data for emotion, allowing for a more discerning model.
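To make the text-data case concrete, here is a tiny, entirely made-up example of what labeled data looks like in code: each review is paired with a human-assigned sentiment tag, and that pairing is what a supervised model learns from.

```python
# Tiny illustrative labeled dataset for sentiment analysis (made-up examples).
# Each record pairs a raw input (review text) with a human-assigned label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_reviews = [
    ("The product arrived quickly and works great", "positive"),
    ("Terrible support, I want a refund", "negative"),
    ("It does the job, nothing special", "neutral"),
    ("Absolutely love it, five stars", "positive"),
    ("Broke after two days of use", "negative"),
    ("Average quality for the price", "neutral"),
]
texts, labels = zip(*labeled_reviews)

# The labels supply the context the model needs to learn the text-to-sentiment mapping.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["Fast shipping and excellent quality"]))
```

A real project would of course need far more examples, but the structure — inputs consistently paired with trustworthy labels — is the same at any scale.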
The type of labeling also depends on the use case and model requirements. Labels can range from simple classifications like “cat” or “dog” to more detailed pixel-based segmentations outlining objects in images. There may also be hierarchies in the data labeling — for example, you might want your model to understand that both cats and dogs are usually household pets.
Data labeling is often done manually by humans, which has obvious drawbacks, including massive time cost and the potential for unconscious biases to manifest in datasets. There are a number of automated data labeling techniques that can be leveraged, but these also come with their own unique problems.
High-quality labeled data is critically important for training supervised learning models. It provides the context necessary for building quality models that will make accurate predictions. In the realm of data analytics and data science, the accuracy and quality of data labeling often determine the success of ML projects. For businesses looking to embark on a supervised project, choosing the right data labeling tactics is essential.
Approaches to data labeling There are a number of approaches to data labeling, each with its own unique benefits and drawbacks. Care must be taken to select the right option for your needs, as the labeling approach selected will have significant impacts on cost, time and quality.
Manual labeling: Despite its labor intensive nature, manual data labeling is often used due to its reliability, accuracy and relative simplicity. It can be done in-house or outsourced to professional labeling service providers.
Automated labeling: Methods include rule-based systems, scripts and algorithms, which can help to speed up the process. Semi-supervised learning is often employed, during which a separate model is trained on small amounts of labeled data and then used to label the remaining dataset (a minimal sketch of this pseudo-labeling pattern follows this list). Automated labeling can suffer from inaccuracies — especially as the datasets increase in complexity.
Augmented data: Techniques can be employed to make small changes to existing labeled datasets, effectively multiplying the number of available examples. But care must be taken, as augmented data can potentially increase existing biases within the data.
Synthetic data: Rather than modifying existing labeled datasets, synthetic data uses AI to create new ones. Synthetic data can feature large volumes of novel data, but it can potentially generate data that does not accurately reflect reality — increasing the importance of quality assurance and proper validation.
Crowdsourcing: This provides access to human annotators but introduces challenges around training, quality control and bias.
Pre-labeled datasets: These are tailored to specific uses and can often be used for simpler models.
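Here is the pseudo-labeling sketch referred to under automated labeling above: a purely illustrative example (toy data, arbitrary confidence cut-off) of training a seed model on a small labeled pool and auto-labeling only the unlabeled examples it is confident about.

```python
# Illustrative pseudo-labeling sketch for semi-supervised auto-labeling.
# A seed model trained on a small labeled pool labels only what it is sure of;
# low-confidence examples are left for human annotators.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=15, random_state=7)
labeled_idx = np.arange(50)           # small hand-labeled pool
unlabeled_idx = np.arange(50, 1000)   # the rest is treated as unlabeled

seed_model = LogisticRegression(max_iter=1000).fit(X[labeled_idx], y[labeled_idx])

probs = seed_model.predict_proba(X[unlabeled_idx])
confident = probs.max(axis=1) >= 0.9  # the 0.9 cut-off is an arbitrary assumption
pseudo_labels = probs.argmax(axis=1)[confident]

print(f"Auto-labeled {confident.sum()} of {len(unlabeled_idx)} examples; "
      f"{(~confident).sum()} routed to human review")
```

Every pseudo-label the seed model gets wrong becomes a labeling error baked into the final training set, which is why the quality-assurance caveats above matter.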
Challenges and limitations in data labeling Data labeling presents a number of challenges due to the need for vast amounts of high-quality data. One of the primary concerns in AI research is the inconsistent nature of data labeling , which can significantly impact the reliability and effectiveness of models. These include: Scalability: Manual data labeling requires significant human efforts, which severely impact scalability. Alternatively, automated labeling and other AI-powered labeling techniques can quickly become too expensive or result in low quality datasets. A balance must be found between time, cost and quality when undertaking a data labeling exercise.
Bias: Whether conscious or unconscious, large datasets can often suffer from some form of underlying bias. These can be combated by using thoughtful label design, diverse teams of human annotators and thorough checking of trained models for underlying biases.
Drift: Inconsistencies between individuals as well as changes over time can result in performance reduction as new data shifts from the original training dataset. Regular human training, consensus checks and up-to-date labeling guidelines are important for avoiding label drift.
Privacy: Personally identifiable information (PII) or confidential data requires secure data labeling processes. Techniques like data redaction, anonymization and synthetic data can manage privacy risks during labeling (a small redaction sketch follows below).
There is no one size fits all solution for efficient large-scale data labeling. It requires careful planning and a healthy balance, considering the various dynamic factors at play.
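As one small, hypothetical illustration of the redaction techniques mentioned under the privacy challenge above, even simple pattern-based scrubbing can strip obvious identifiers from text before it reaches annotators; real deployments would need far more robust PII detection than regular expressions.

```python
# Naive pattern-based redaction sketch: scrub obvious identifiers before
# sending text out for labeling. Real PII detection needs much more than regex.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jane Doe at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane Doe at [EMAIL] or [PHONE].
```

Names, addresses and other free-text identifiers slip straight through a filter like this, which is why dedicated anonymization tooling or synthetic stand-in data is usually the safer route for sensitive datasets.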
The future of data labeling in machine learning The progression of AI and ML is not looking to slow down anytime soon. Alongside this is the increased need for high-quality labeled datasets. Here are some key trends that will shape the future of data labeling: Size and complexity: As ML capabilities progress, datasets that train them are getting bigger and more complex.
Automation: There is an increasing trend towards automated labeling methods which can significantly enhance efficiency and reduce costs involved with manual labeling. Predictive annotation, transfer learning and no-code labeling are all seeing increased adoption in an effort to reduce humans in the loop.
Quality: As ML is applied to increasingly important fields such as medical diagnosis, autonomous vehicles and other systems where human life might be at stake, the necessity for quality control will dramatically increase.
As the size, complexity and criticality of labeled datasets increases, so too will the need for improvement in the ways we currently label and check for quality.
Actionable insights for data labeling Understanding and choosing the best approach to a data labeling project can have a huge impact on its success from a financial and quality perspective. Some actionable insights include: Assess your data: Identify the complexity, volume and type of data you are working with before committing to any one labeling approach. Use a methodical approach that best aligns with your specific requirements, budget and timeline.
Prioritize quality assurance: Implement thorough quality checks, especially if automated or crowdsourced labeling methods are used.
Take privacy considerations: If dealing with sensitive or PII, take precautions to prevent any ethical or legal issues down the line. Techniques like data anonymization and redaction can help maintain privacy.
Be methodical: Implementing detailed guidelines and procedures will help to minimize bias, inconsistencies and mistakes. AI powered documentation tools can help track decisions and maintain easily accessible information.
Leverage existing solutions: If possible, utilize pre-labeled datasets or professional labeling services. This can save time and resources. When looking to scale data labeling efforts, existing solutions like AI powered scheduling could help optimize the workflow and allocation of tasks.
Plan for scalability: Consider how your data labeling efforts will scale with the growth of your projects. Investing in scalable solutions from the start can save effort and resources in the long run.
Stay informed: Stay up to speed on emerging trends and technologies in data labeling. Tools like predictive annotation, no-code labeling and synthetic data are constantly improving, making data labeling cheaper and faster.
Thorough planning and consideration of these insights will enable a cheaper and smoother operation, and ultimately, a better model.
Final thoughts The integration of AI and ML into every aspect of society is well under way, and datasets needed to train algorithms continue to grow in size and complexity.
To maintain the quality and relative affordability of data labeling, continuous innovation is needed for both existing and emerging techniques.
Employing a well-thought-out and tactical approach to data labeling for your ML project is critical. By selecting the right labeling technique for your needs, you can help ensure a project that delivers on requirements and budget.
Understanding the nuances of data labeling and embracing the latest advancements will help to ensure the success of current projects, as well as labeling projects to come.
Matthew Duffin is a mechanical engineer and founder of rareconnections.io.
"
|
3,025 | 2,023 |
"OpenAI’s Revenue Crossed $1.3 Billion Annualized Rate, CEO Tells Staff — The Information"
|
"https://www.theinformation.com/articles/openais-revenue-crossed-1-3-billion-annualized-rate-ceo-tells-staff"
|
"Exclusive: OpenAI Co-Founder Altman Plans New Venture Subscribe and Read now OpenAI’s Revenue Crossed $1.3 Billion Annualized Rate, CEO Tells Staff [email protected] om Profile and archive → Follow Amir on Twitter ChatGPT maker OpenAI is generating revenue at a pace of $1.3 billion a year, CEO Sam Altman told staff this week, according to several people with knowledge of the matter. Altman’s remark implies the company is generating more than $100 million per month, up 30% from this summer, when the Microsoft-backed startup generated revenue at a $1 billion-a-year pace.
The revenue pace, largely from subscriptions to its conversational chatbot, represents remarkable growth since the company launched a paid version of ChatGPT in February. For all of last year, the company’s revenue was just $28 million. Since the release of ChatGPT, OpenAI has become a closely watched barometer of demand for artificial intelligence that can help software developers code faster and help business managers quickly summarize documents or generate blog posts and advertising materials.
"
|
3,026 | 2,023 |
"How to use OpenAI's new GPT Builder | VentureBeat"
|
"https://venturebeat.com/ai/how-to-use-openais-new-chatgpt-builder"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to use OpenAI’s new GPT Builder Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI ChatGPT Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s here: After showing it off to the world for the first time on Monday at its developer conference DevDay, and warding off a DDoS attack , OpenAI yesterday released one of its new marquee tools, GPT Builder, to all ChatGPT Plus subscribers.
As indicated by the name, OpenAI’s GPT Builder allows individuals to build their own customized versions of ChatGPT, the company’s hit large language model (LLM) powered chatbot.
And similar to ChatGPT, the GPT Builder tool works through natural language: the user simply has to type the kind of chatbot and capabilities they want, and ChatGPT builder will do the rest — at least, that’s how it’s supposed to work.
In practice, building a GPT from scratch still requires attention and time commitment from the user, though not nearly as much as building a new chatbot from scratch using code and programming skills. In fact, that’s precisely the point: a completely non-technical user should be able to build apps that help them and their workflows.
But in VentureBeat’s hands-on tests of the new feature (used through a personal ChatGPT Plus account the author pays for directly), it took about 1.5 hours of going back and forth with GPT Builder, tweaking its results with our own guidance typed in plain English, to get a custom GPT for answering PR emails that passes our muster.
Still, the potential seems immense, especially as the user grows more comfortable with the GPT Builder’s unique quirks and efforts to guess what you want your resulting GPT to do. For individuals and organizations interested in building their own GPTs, here’s a brief overview on getting started.
Where to find GPT Builder First things first: in order to get access to the GPT Builder, you’ll need to be a paying subscriber to ChatGPT Plus ($20/month) or ChatGPT for Enterprise (variable pricing depending on number of users and tokens needed). You can sign up here.
Then, once you’ve signed up and refreshed your browser, you should get a dialog box from OpenAI with the new ChatGPT interface, which includes a new left sidebar column on your desktop browser screen (GPT Builder is only available on desktop for now).
Look up in the upper left hand corner of this sidebar to find a menu option labeled “Explore.” Click this, as it’s how you’ll get into the Builder.
From there, ChatGPT should show you a new screen with a list of icons and options organized under two subheadings “My GPTs” and “Made by OpenAI.” Click on the plus icon labeled “Create a GPT” under “My GPTs” to open the GPT Builder.
Getting started building a GPT Now you should be in the GPT Builder. The interface is helpfully simple. On the left side is a column with OpenAI’s GPT Builder bot. Make sure the tab marked “Create” is selected (it should be, automatically).
This is what you will “talk” to by typing in instructions for what kind of GPT you want to build, what you want it to do. It’s also where, ultimately, you’ll likely have to engage in a back-and-forth with the bot so that it can revise its work — it rarely gets the GPT build right on the first try, in our limited testing, despite the amount of time or detail you put into the initial instructions.
The right side column is a preview of what your GPT will look and act like. This preview won’t kick in until you’ve already sent several instructions to the GPT Builder bot on the left side, so don’t worry about it for now.
The GPT Builder bot does its best to prompt you, the human user, presenting the following message: “Hi! I’ll help you build a new GPT. You can say something like, ‘make a creative who helps generate visuals for new products’ or ‘make a software engineer who helps format my code.’ What would you like to make?” At the bottom of the left hand column, you can type in your response to this question. This lower left side text entry box is also where you will enter your instructions to further refine and customize your GPT and ask the GPT Builder bot for revisions.
Importantly, you’ll also note there is a little paperclip icon to the left of this text entry box that resembles the attachment function in email clients. This is fitting, as it is for uploading attachments such as Word documents or Excel files. If you are building a GPT that you want to provide with data structured in these ways, the attachment is a great option.
You could, theoretically, attach a Word document with your brand’s voice and style guidelines, or even legal requirements (though we are no legal experts and can’t advise this) and ask the GPT Builder bot to reference and follow these in its resulting custom GPT (though again, we don’t work for OpenAI and we can’t guarantee how well it will incorporate whatever information you provide in the form of an attachment). You could also attach imagery and ask the GPT Builder to make imagery in a similar style, if you are building an image or visual-making custom GPT.
Go ahead and type in whatever instructions you want to build here. It can handle long messages up to thousands of characters.
In VentureBeat’s case, we asked GPT Builder to make an email reading and responding assistant. Here’s our sample instructions to the Builder bot.
GPT Builder builds and asks you follow-up questions about what you want GPT Builder should respond by beginning the process of building the GPT that you instructed it to make, showing a purple icon and the words “Building GPT” or “Updating GPT.” This may take several minutes.
During that process, the GPT Builder bot will likely ask you some follow up questions (again, no matter how detailed your instructions were in your initial answer prompt). This is so GPT Builder can better understand what kinds of behaviors and responses you want from your custom GPT.
GPT Builder makes a logo for your custom GPT using DALL-E 3 Among the questions the GPT Builder bot will ask you is what you want to name your custom GPT, as well as what kind of logo you want for it. The logo is a circular badge that will appear beside the name of your custom GPT within ChatGPT going forward, and is the easy visual shorthand you can use to find it.
If you choose to share your GPT with others using a link or even publicly on the forthcoming GPT Store, this name and logo will also be visible to those who have access to your GPT.
Remember the list of icons we screenshotted and included above showing GPTs built by OpenAI? Your custom GPT icon will appear above these but in almost the same identical style.
GPT Builder will use OpenAI’s DALL-E 3 image generation AI model , baked into ChatGPT since last month, to generate a fresh logo for you based on the capabilities of your GPT and the name you gave it.
If you don’t like the logo, you can tell the GPT Builder bot as much and ask it for a revision. The more detail or more targeted, specific instructions you give it for what you want to see in the logo, the better job it will do — usually — in producing that visual for you.
Tinkering, iterating, editing, and providing feedback Now comes the “fun” part. GPT Builder will finish building your custom GPT and provide you with a message saying so, to the effect of “Your GPT is now fully configured and ready to use in the Gizmo playground. You can try it out…” Trying it out, in this case, means you are able to move to the right-side column of the GPT Builder view labeled “Preview,” which has its own text entry box at the bottom in the lower right corner labeled with light gray text “Message GPT…” Clicking and entering text here, you should be able to type in whatever commands or paste whatever text/documents you want for your custom GPT to use to perform its tasks. This is the “Gizmo playground” referred to in the message above.
If you run into issues with the performance of your GPT — if you don’t like the responses it is giving you, or it is not doing what you asked — you can simply move your cursor back over to the left side of the screen, to the text entry box labeled “Message GPT Builder” and enter your complaints and suggested fixes in that space.
Note: this will cause the Preview pane on the right to become inactive while GPT Builder makes your requested revisions/fixes, but it should return once it is finished updating. Again, this process can take several minutes.
Once you’ve gone back and forth testing and iterating on your custom GPT and are relatively happy with it, you can go ahead and click the green “Save” button in the upper right-hand corner.
Private, semi-private, or public sharing: which is right for your GPT? You’ll note there’s a drop-down arrow as well, which allows you to select if you want to save your custom GPT as a private model (accessible only to you/whoever is logged into your ChatGPT Plus/Enterprise account), as a semi-private link you can share with selected third-parties (anyone with the URL and a ChatGPT Plus account can access it with this link), or with the entire world publicly (it will appear in the GPT Store, when that comes online, expected in the coming weeks and months from OpenAI).
You can toggle between these privacy settings as needed even after you’ve saved it the first time, similar to cloud documents and files in the likes of Dropbox or Google Docs.
And don’t worry about it getting your custom GPT perfect on your first try. You can always click “Configure” in the left-hand GPT Builder pane to go ahead and re-edit your GPT and its capabilities. The “Configure” tab also brings up a whole range of fields, including an attachment button labeled “Knowledge,” which you can use to further refine how your custom GPT works.
To find your GPT from here on out from the main ChatGPT screen, you should see it in the upper left hand corner of the left sidebar, below the white ChatGPT logo. Clicking this will pull up your custom GPT, which will take over the interface.
Clicking the “Explore” tab will also bring you back to the screen showing all your custom GPTs and OpenAI’s first-party recommended ones, as well as buttons marked “Edit” by yours, allowing you to go back into the GPT Builder and further modify/refine them.
That’s it! Now you’re ready to go ahead and get to building your own custom GPT. Godspeed.
"
|
3,027 | 2,023 |
"OpenAI’s six-member board will decide ‘when we’ve attained AGI’ | VentureBeat"
|
"https://venturebeat.com/ai/openais-six-member-board-will-decide-when-weve-attained-agi"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI’s six-member board will decide ‘when we’ve attained AGI’ Share on Facebook Share on X Share on LinkedIn Image created with DALL-E 3 for VentureBeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to OpenAI, the six members of its nonprofit board of directors will determine when the company has “ attained AGI ” — which it defines as “a highly autonomous system that outperforms humans at most economically valuable work.” Thanks to a for-profit arm that is “legally bound to pursue the Nonprofit’s mission,” once the board decides AGI, or artificial general intelligence, has been reached, such a system will be “excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.” But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached — for OpenAI, and therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft? For-profit arm is subject to OpenAI’s nonprofit mission The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its “nonprofit” status — even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.
Smith said: “Meta is owned by shareholders. OpenAI is owned by a non-profit. Which would you have more confidence in? Getting your technology from a non-profit or a for profit company that is entirely controlled by one human being?” In his thread, Kilpatrick quoted from the “Our structure” page on OpenAI’s website, which offers details about OpenAI’s complex nonprofit/capped profit structure. According to the page, OpenAI’s for-profit subsidiary is “fully controlled” by the OpenAI nonprofit (which is registered in Delaware). While the for-profit subsidiary, OpenAI Global, LLC — which appears to have shifted from the limited partnership OpenAI LP, which was previously announced in 2019, about three years after founding the original OpenAI nonprofit — is “permitted to make and distribute profit,” it is subject to the nonprofit’s mission.
It certainly sounds like once OpenAI achieves its stated mission of reaching AGI, Microsoft will be out of the loop — even though at last week’s OpenAI Dev Day, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella that “I think we have the best partnership in tech…I’m excited for us to build AGI together.” And in a new interview with the Financial Times, Altman said the OpenAI/Microsoft partnership was “working really well” and that he expected “to raise a lot more over time.” Asked if Microsoft would keep investing further, Altman said: “I’d hope so…there’s a long way to go, and a lot of compute to build out between here and AGI . . . training expenses are just huge.” From the beginning, OpenAI’s structure details say, Microsoft “accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity.” An OpenAI spokesperson told VentureBeat that “OpenAI’s mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety.” Members of nonprofit board have ties to Effective Altruism Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever, and CEO Sam Altman, as well as non-employees Adam D’Angelo, Tasha McCauley, and Helen Toner.
D’Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, all have been tied to the Effective Altruism movement — which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its ‘dangerous’ take on AI safety.
And OpenAI has long had its own ties to EA: for example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists. And Jan Leike, who leads OpenAI’s superalignment team, reportedly identifies with the EA movement.
The OpenAI spokesperson said that “None of our board members are effective altruists,” adding that “non-employee board members are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group.” Board’s AGI decision-making is ‘unusual’ Suzy Fulton, who offers outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while in many circumstances, it would be “unusual” to have a board make this AGI determination, OpenAI’s nonprofit board owes its fiduciary duty to supporting its mission of providing “safe AGI that is broadly beneficial.” “They believe the nonprofit board’s beneficiary is humanity, whereas the for-profit one serves its investors,” she explained. “Another safeguard that they are trying to build in is having the Board majority independent, where the majority of the members do not have equity in Open AI.” Was this the right way to set up an entity structure and a board to make this critical determination? “We may not know the answer until their Board calls it,” Fulton said.
Anthony Casey, a professor at The University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is “unusual,” but he did not think there is any legal impediment.
“It should be fine to specifically identify certain issues that must be made at the Board level,” he said. “Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue,” particularly “mission-critical issues.” Does focusing on OpenAI’s AGI mission legitimize their claims? Not all experts believe , however, that artificial general intelligence is coming anytime soon, while some question whether it is even possible.
According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a claim with the FTC in March saying the agency should investigate OpenAI and order the company to “halt the release of GPT models until necessary safeguards are established,” OpenAI, as an organization, “does suffer from diversity of perspectives.” Its focus on AGI, she explained, has “ignored current impact” of AI models and tools.
However, she disagreed with any debate about the size or diversity of the OpenAI board in the context of who gets to determine whether or not OpenAI has “attained” AGI — saying it distracts from discussions about whether their underlying mission and claim is even legitimate.
“This would shift the focus, and de facto legitimize the claims that AGI is possible,” she said.
But does OpenAI’s lack of a clear definition of AGI — or whether there will even be one AGI — skirt the issue? For example, an OpenAI blog post from February 2023 said “the first AGI will be just a point along the continuum of intelligence.” And in a January 2023 LessWrong interview, CEO Sam Altman said that “the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful.” What OpenAI’s AGI mission really means for Microsoft Still, it’s hard to say what OpenAI’s vague definition of AGI will really mean for Microsoft — especially without having full details about the operating agreement between the two companies. For example, Casey said, OpenAI’s structure and relationship with Microsoft could lead to some “big dispute” if OpenAI is sincere about its non-profit mission.
“There are a few nonprofits that own for profits,” he pointed out — the most notable being the Hershey Trust.
“But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object,” he explained. “But here Microsoft’s for-profit interests could directly conflict with the non-profit interest of the controlling entity.” The cap on profits is easy to implement, he added, but “the hard thing is what to do if meeting the maximum profit conflicts with the mission of the non-profit?” Casey added that “default rules would say that hitting the profit is the priority and the managers have to put that first (subject to broad discretion under the business judgment rule).” Perhaps, he continued, “Microsoft said, ‘Don’t worry, we are good either way. You don’t owe us any duties.’ That just doesn’t sound like the way Microsoft would negotiate.”
"
|
3,028 | 2,023 |
"Invisible AI watermarks won't stop bad actors. But they are a ‘really big deal’ for good ones | VentureBeat"
|
"https://venturebeat.com/ai/invisible-ai-watermarks-wont-stop-bad-actors-but-they-are-a-really-big-deal-for-good-ones"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Invisible AI watermarks won’t stop bad actors. But they are a ‘really big deal’ for good ones Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In an era of deepfakes, bot-generated books and AI images created in the style of famous artists, the promise of digital watermarks to identify AI-generated images and text has been tantalizing for the future of AI transparency.
Back in July, seven companies promised President Biden they would take concrete steps to enhance AI safety, including watermarking , while in August, Google DeepMind released a beta version of a new watermarking tool, SynthID, that embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification.
Thus far, however, digital watermarks — whether visible or invisible — are not sufficient to stop bad actors. In fact, Wired recently quoted a University of Maryland computer science professor, Soheil Feizi, who said “we don’t have any reliable watermarking at this point — we broke all of them.” Feizi and his fellow researchers examined how easy it is for bad actors to evade watermarking attempts. In addition to demonstrating how attackers might remove watermarks, they showed how to add watermarks to human-created images, triggering false positives.
Digital watermarking can enable and support good actors But in a conversation with VentureBeat, Hugging Face computer scientist and AI ethics researcher Margaret Mitchell said that while digital watermarks may not stop bad actors, they are a “really big deal” for enabling and supporting good actors who want a sort of embedded ‘nutrition label’ for AI content.
When it comes to the ethics and values surrounding AI-generated images and text, she explained, one set of values is related to the concept of provenance. “You want to be able to have some sort of lineage of where things came from and how they evolved,” she said. “That’s useful in order to track content for consent, credit and compensation. It’s also important in order to understand what the potential inputs for models are.” It’s this bucket of watermarking users that Mitchell said she gets “really excited” about. “I think that has really been lost in some of the recent rhetoric,” she said, explaining that there will always be ways AI technology doesn’t work well. But that doesn’t mean the technology as a whole is bad.
“For a subset of the users or those affected it won’t be the right tool, but for the vast majority it will be right — bad actors are a subset of users, and then a subset of users within that will be those that have the technical know-how to actually perturb the watermark.” New functions on Hugging Face allow anyone to provide provenance Mitchell highlighted new functions from Truepic, which provides authenticity infrastructure for the internet, on Hugging Face, an open-access AI platform for hosting machine learning (ML) models, that allow Hugging Face users to automatically add responsible provenance metadata to AI-generated images.
First, Truepic added content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to open source models on Hugging Face, allowing anyone to generate and use transparent synthetic data. In addition, it created an experimental space to combine the provenance credentials with invisible watermarking using technology from Steg.AI , a provider of “sophisticated forensic watermarking solutions” that uses Light Field Messaging (LFM), a process of embedding, transmitting, and receiving hidden information in video that is displayed on a screen and captured by a handheld camera.
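To make the mechanics concrete, here is a deliberately simplified sketch of the general idea behind invisible watermarking: hide a small payload in pixel data, then read it back later. This toy least-significant-bit example is for illustration only; it is not how SynthID, Steg.AI's LFM or Truepic's C2PA credentials work, and production systems are designed to survive compression, cropping and other edits that would defeat this naive scheme.

```python
# Toy illustration of an invisible watermark: hide bits in the least significant
# bit (LSB) of pixel values, then recover them. Not robust and not a real product.
import numpy as np

def embed_watermark(pixels: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Write payload bits into the least significant bits of the first pixels."""
    flat = pixels.flatten()                    # flatten() returns a copy; the input is untouched
    for i, bit in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | bit       # clear the LSB, then set it to the payload bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the payload back out of the least significant bits."""
    flat = pixels.flatten()
    return [int(flat[i] & 1) for i in range(n_bits)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in for a generated image
    payload = [1, 0, 1, 1, 0, 0, 1, 0]                               # e.g. an "AI-generated" marker
    marked = embed_watermark(image, payload)
    print("Payload survives round trip:", extract_watermark(marked, len(payload)) == payload)
```

The visual change is imperceptible, which is the appeal; the weakness, as the Maryland research shows, is that anything this simple is also trivial to strip or forge.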
Consensus on promise of watermarking When asked if trying to tackle issues of provenance with watermarking tools feels like a drop in an ocean of AI-generated content, Mitchell laughed. “Welcome to ethics,” she said. “It’s always something good for one small use case and you build and iterate from there.” But one thing that is particularly exciting about watermarking as a tool, she explained, is that it is “something that both people focused on human values broadly in AI, and then AI Safety with a capital S, have agreed that this is critical with their realms.” Then, she added, interest in digital watermarking systems rose to the level of being a part of the White House voluntary commitments.
“So in terms of all the various things that various people think are worth prioritizing, there is consensus on watermarking — people actually care about this,” she said. “Compared to some of the other work I’ve been involved in, it doesn’t seem like a drop in the bucket at all. It seems like you’re starting to fill up buckets.”
"
|
3,029 | 2,023 |
"Cybersecurity industry responds to SEC charges against SolarWinds and former CISO | VentureBeat"
|
"https://venturebeat.com/security/cybersecurity-industry-responds-to-sec-charges-against-solarwinds-and-former-ciso"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cybersecurity industry responds to SEC charges against SolarWinds and former CISO Share on Facebook Share on X Share on LinkedIn Credit: REUTERS/Brendan McDermid Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The cybersecurity industry is reeling after the shocking news that the SEC has charged SolarWinds and its former CISO with fraud around the notorious SUNBURST attack.
A 68-page-long complaint filed Oct. 30 alleges that from at least October 2018 through Jan. 12, 2021, SolarWinds and its then security head Timothy G. Brown defrauded investors and customers through “misstatements, omissions and schemes that concealed both the company’s poor cybersecurity practices and its heightened — and increasing — cybersecurity risks.” SUNBURST — with which SolarWinds’ name is now synonymous — was one of the most significant cyberattacks in history because it infiltrated the software supply chain and wrought havoc on enterprises of all sizes, all over the world. The U.S. government was even affected, prompting stricter guidelines and requirements to protect the federal software supply chain.
The full ramifications of the attack are as yet unknown and will likely be felt for the foreseeable future.
The fraud charges come as the SEC ramps up cybersecurity accountability — most notably its new four-day disclosure requirement for public companies — and it could have dramatic implications far beyond the cybersecurity realm.
“The charges serve as a reminder to CISOs about the importance of ethical behavior and professional conduct,” said George Gerchow, faculty member at cybersecurity research and advisory firm IANS Research.
“It is crucial for CISOs to maintain a high level of integrity, adhere to ethical standards and prioritize the security and privacy of their organization’s data.” Internal doc says company ‘not very secure’ The Austin, Texas-based SolarWinds offers network and infrastructure system management tools to hundreds of thousands of organizations globally.
Potentially as early as 2018, hackers gained access to the company’s network and deployed malicious code into its Orion IT monitoring system. Orion is considered to be a “crown jewel” asset, according to the SEC, that accounted for 45% of the company’s revenue in 2020.
The agency says that during the ensuing two-year attack, SolarWinds and Brown made “materially false and misleading statements and omissions” about cybersecurity risks and practices in several public disclosures, including a “security statement” on its website and reports filed with the SEC.
For instance, in Oct. 2018 — the same month SolarWinds conducted its Initial Public Offering (IPO) — Brown wrote in an internal presentation that SolarWinds’ “current state of security leaves us in a very vulnerable state for our critical assets.” Other presentations during that period referred to SolarWinds’ remote access setup as “not very secure” and warned that an exploiter could “basically do whatever without us detecting it until it’s too late,” which could lead to “major reputation and financial loss.” Furthermore, a Sept. 2020 internal document shared with Brown and others stated that “the volume of security issues being identified over the last month have [sic] outstripped the capacity of engineering teams to resolve.” “SolarWinds’ public statements about its cybersecurity practices and risks painted a starkly different picture from internal discussions and assessments,” the complaint alleges.
The SEC also reports that the company made an incomplete disclosure about the attack in a December 14, 2020 Form 8-K filing, after which its stock price dropped roughly 25% over the next two days and 35% by the end of the month.
In the years since, the company has struggled to rebuild its reputation, with leaders recently working on a rebrand and floating the idea of moving back to a private model.
In a blog post , CEO Sudhakar Ramakrishna said SolarWinds “vigorously opposes” the SEC action.
“How we responded to SUNBURST is exactly what the U.S. government seeks to encourage,” he said.
So, it is “alarming” that the SEC has filed what the company believes is a “misguided and improper enforcement action” that represents “a regressive set of views and actions inconsistent with the progress the industry needs to make and the government encourages.” SUNBURST only highlighted rampant security issues Experts emphasize the SEC isn’t targeting SolarWinds due to SUNBURST: The complaint says that false statements about security would have violated securities laws even if SolarWinds hadn’t been hacked.
“That they were targeted only served to highlight the issues,” said Jake Williams, a faculty member at IANS Research.
Michael Isbitski, director of cybersecurity strategy at Sysdig , pointed to the many security gaps called out: remote access for unmanaged devices, threat modeling missteps, inadequate web application testing, inappropriate password management policies and weaker access controls.
While SolarWinds attested to following common security best practices — such as NIST Cybersecurity Framework, NIST Security and Privacy Controls for Information Systems and Organizations and Secure Development Lifecycle (SDL) — evidence seems to show that they had significant gaps in meeting all criteria for all applications and systems, said Isbitski. This created material issues that weren’t appropriately disclosed and misled investors.
“A key takeaway here is to pick a standard and ensure you’re following it universally,” he said.
The enduring ramifications of SUNBURST That’s not to say that SUNBURST didn’t dramatically change the cybersecurity industry.
“The SUNBURST attack has changed our industry in so many ways,” said Gerchow.
Notably, it has brought attention to the importance of supply chain security. “Organizations are now more aware of the potential risks associated with third-party software and are taking steps to enhance their security practices,” he said.
The attack also highlighted the need for continuous monitoring and threat detection, prompting organizations to invest in advanced tools and technologies. Finally, and perhaps most notably, it has caught the attention of regulators.
“This may result in stricter requirements for organizations to ensure the security of their supply chains,” said Gerchow.
SEC setting a new standard This case underscores the criticality of honesty around the state and maturity of cybersecurity programs, particularly for publicly traded companies, experts point out.
Relevant expertise, cybersecurity processes and history of security incidents must be disclosed under SEC cybersecurity disclosure rules, Isbitski said. These have existed in different forms for more than a decade, with the latest version becoming fully enforceable in December 2023.
Furthermore, being open and honest is simply good business practice. “Transparency is crucial in maintaining the trust of customers, partners and stakeholders,” said Gerchow.
When a breach occurs, it is important to inform those who may be affected so they can take necessary precautions and protect themselves, he emphasized. By being open about a breach, companies show a commitment to their customers’ security and demonstrate accountability.
Gerchow’s colleague Williams, a former U.S. National Security Agency (NSA) hacker, commented that “the SEC is setting a new standard for security disclosures with this lawsuit.” He cautioned: “Don’t be surprised to see that standard used in litigation if you make false, incomplete or misleading statements about security to customers or business partners.” Furthermore, Wells Notices — intents to charge — are typically issued to CEOs and CFOs, said Sivan Tehila, CEO of cybersecurity platform Onyxia.
But in this case, CISO Brown is explicitly included.
“This could mean new liabilities for cybersecurity executives moving forward,” said Tehila.
Keeping an eye on the SolarWinds case as it unfolds CISOs should keep a close eye on the case, cybersecurity experts advise.
For starters, it serves as a reminder of the potential legal and regulatory consequences that can arise from cybersecurity incidents , Gerchow said. Understanding these charges and the eventual outcome of the case can help security leaders assess potential risks they may face in similar situations and take proactive preventative measures.
“CISOs should analyze the specific allegations made by the SEC and evaluate if their own organization has similar vulnerabilities or shortcomings,” said Gerchow. “This can help them identify areas for improvement and strengthen their cybersecurity posture.” He advised that CISOs study SolarWinds’ incident response actions to assess their effectiveness. Examining it as a use case can help them enhance their own incident response plans, including communication strategies, containment measures and recovery processes. Just as importantly, security leaders should be reinforcing ethical behavior within their organizations.
Isbitski agreed, saying that companies and their leadership should follow the lawsuit as it plays out, “as this is one of the first battle tests of the final cybersecurity rules.”
"
|
3,030 | 2,023 |
"Skills mapping: Turning skills to workforce gold | VentureBeat"
|
"https://venturebeat.com/ai/skills-mapping-turning-skills-to-workforce-gold"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Skills mapping: Turning skills to workforce gold Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
I used to stand in front of a room and teach classes in person. It was the foundation on which academia was built. We’re all encouraged to get a four-year college degree, sit in a classroom and learn, and then we’re all omniscient and ready to conquer the world — right? Not exactly. Studies show that while a skill used to last 15 to 20 years, the shelf-life of any skill is now only 3 to 5 years.
The one-size-fits-all approach to learning is no longer going to cut it, even if the system we grew up in is telling us otherwise.
Learning can’t be one-and-done. It should happen over the course of our lifetimes. This requires a new level of agility to learn, unlearn and relearn multiple times throughout our careers and an entirely new operating model for businesses.
The mindset shift: From static learning to dynamic growth The processes and methodology underpinning human capital management as well as the set of assumptions behind them — namely, that people live in a static hierarchy and have defined roles, report to one individual and do one type of work over the course of their career — are broken. The world of work has changed in a profound way, and we can no longer live in an old paradigm.
We also know that employees do not stand still or remain stagnant; their role in an organization constantly changes, given their personal and professional experiences along the way. Further, all employees are capable of contributing to their company in a bigger and broader way.
Companies don’t stand still, and the market doesn’t stand still — and therefore, jobs shouldn’t stand still. If leaders accept this new reality and believe in making people more dynamic, they will reap the benefits with better productivity, efficiency and employee experiences.
Research shows that internal talent mobility programs have a positive impact on employee retention, with a 60% reduction in attrition when a talent marketplace is used by employees.
This means we need to break our old assumptions and accept learning and development with a focus on skills.
Skills are the foundation for 21st-century learning Companies that move away from a static job architecture to a skills-based architecture can understand those skills needed to drive a business strategy forward and identify opportunities to grow and develop talent. Unfortunately, many don’t have the skills, strategy or technology in place to do so.
Companies struggle to capture the holistic view of their skills supply chain (the skills they have and the skills they need) and many lack the technology to automate the process of surfacing skills and delivering training and learning opportunities. Businesses that adopt a skills-centric operating model empowered by technology can dramatically increase their ability to manage the supply and demand of skills. Ultimately, this makes organizations massively more productive.
This is where skills mapping and intelligence come in.
Skills: Workforce gold In a nutshell, skills mapping is matching skills to roles, titles and the type of work that people are doing to help find, hire and grow talent. Skills mapping is even stronger when underpinned by artificial intelligence (AI) to map people with the right skills to the right projects and learning opportunities (both on-the-job or online courses) at the right time. This also helps ensure that talent decisions are based on data and insights, not biases or assumptions.
Another way to think about skills mapping is to think about skills as workforce gold and technology as hydraulics. Right now, companies are mining for gold (skills) across different systems or manually using spreadsheets and email, which makes data and insights hard to find. Applying hydraulics (technology) accelerates our ability to find and match skills, and ultimately point people to the right work and training opportunities to grow themselves and the business.
Skills mapping is a complex shift to accept, but once understood, it will be a massive uplift for all organizations. It will help employees continue to learn and grow while helping businesses execute their strategy.
Understanding the value of skills mapping and skills intelligence Let’s compare skills matching and intelligence to a math equation. In algebra, we try to solve for X — or the common denominator. In a job, we don’t have a common denominator. We put out a general job description which we know can’t capture everything a person is doing.
If we use the X to represent a skill and we can tie intelligence to that skill, we can get much more targeted and specific. We can now, down to a common denominator (the skill), make a more personalized and customized offer to an employee and know exactly what work we can point them to.
This is game-changing for leaders at every level. From a C-suite perspective, building a scalable skills strategy will help employees evolve at the speed of the business while gathering critical data and insights that inform workforce planning. HR teams can use intelligent skills mapping to help foster a culture of continuous learning, delivering the right resources to employees at the right time. Managers can better understand their team’s skills to provide better coaching, and employees are given the personalized learning opportunities they need to be successful.
Skills are the new currency Most employees today won’t stay in a role for more than 3 to 5 years, and often this is because they crave new learning opportunities. At the same time, many organizations are grappling with a talent gap, whether due to the great resignation or the recession. They know they either need to build capabilities internally by upskilling or reskilling, or find new talent on the open market.
These factors make skills the new currency. It’s critical to understand who our employees are and give them clear pathways for growth to contribute. When leaders maximize the investment they make in employees, employees maximize their investment in the business in return.
To succeed in this new normal, businesses need to have equal pillars of people, processes, and technology. As leaders, we must deviate from the standard way of operating — it might be uncomfortable, but it is worth it.
When skills mapping is enacted, the business outcomes are huge. Journeys become more complex and personalized, employees have an opportunity to explore various careers at one organization, and ultimately the business continues to thrive.
Leaders, it’s time for you to decide: Do you want to remain in the status quo, or do you want to evolve to meet your employee and business needs? The choice is yours.
Kelley Steven-Waiss is the chief transformation officer at ServiceNow.
"
|
3,031 | 2,023 |
"OpenAI announces GPT-4 Turbo, Assistants API at DevDay; aims to revolutionize AI apps | VentureBeat"
|
"https://venturebeat.com/ai/openai-announces-gpt-4-turbo-assistants-api-at-devday-aims-to-revolutionize-ai-apps"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI announces GPT-4 Turbo, Assistants API at DevDay; aims to revolutionize AI apps Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At its annual DevDay event , artificial intelligence startup OpenAI unveiled a slew of new capabilities and pricing changes for its AI platform. The enhancements promise to make OpenAI’s technology more powerful, flexible and affordable for developers building real-world applications.
The star of the show was GPT-4 Turbo , an upgraded version of OpenAI’s large language model that can understand and generate human-like text. GPT-4 Turbo has a greatly expanded context window of 128,000 tokens, allowing it to take in the equivalent of 300 pages of text at once. This expanded memory and reasoning allows for more nuanced conversations and complex instructions.
OpenAI says GPT-4 Turbo is also 3x cheaper per token for input and 2x cheaper for output versus the previous GPT-4. For enterprise users, lower pricing means faster payback on AI investments. But it also lowers the barrier for startups and smaller teams to leverage advanced generative AI.
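For developers, the upgrade largely shows up as a new model identifier in the API. Below is a minimal sketch using the OpenAI Python SDK (v1 and later); it assumes an OPENAI_API_KEY environment variable is set and uses "gpt-4-1106-preview", the preview name under which GPT-4 Turbo was initially made available. The prompts are invented for the example.

```python
# Minimal sketch: calling the GPT-4 Turbo preview model with the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",   # preview identifier for GPT-4 Turbo at launch
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the key obligations in the following 300-page contract: ..."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```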
Assistants API unlocks custom AI agents The launch of Assistants API was another pivotal announcement. This toolset allows developers to build AI agents customized for specific use cases — anything from coding assistants to vacation planners to voice-controlled DJs.
Assistants can leverage capabilities like natural language conversations, executing functions, running code and retrieving external knowledge. The aim of the launch is to unlock a whole new level of intelligence in apps. Assistants can now learn users’ goals and automatically take actions to fulfill them.
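As a rough illustration of the workflow, here is a minimal sketch of the beta Assistants API as exposed in the OpenAI Python SDK at launch (the client.beta.* namespace). The assistant's name, instructions and prompt are invented for the example, and the beta surface may change, so treat this as a sketch rather than a definitive integration.

```python
# Minimal sketch of the beta Assistants API (OpenAI Python SDK v1+).
from openai import OpenAI

client = OpenAI()

# 1. Define the assistant and the built-in tools it may use.
assistant = client.beta.assistants.create(
    name="Vacation planner",                      # example name, not an OpenAI product
    instructions="Help users plan trips; ask clarifying questions before suggesting itineraries.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# 2. Each conversation lives in a thread; add the user's message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Plan a three-day, food-focused trip to Lisbon on a $1,000 budget.",
)

# 3. Run the assistant on the thread; in practice you poll run.status until it is "completed".
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# 4. Read back the assistant's reply (messages are returned newest first).
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content)
```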
Other notable upgrades: Vision, TTS, Copyright Protection Other notable updates include integrating computer vision and text-to-speech into the platform.
DALL-E 3 , OpenAI’s photorealistic image generator, is now accessible directly through the API. OpenAI also rolled out a copyright protection program called Copyright Shield to protect customers against infringement claims when using general platform features.
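Image generation follows the same pattern through the SDK's images endpoint. The short sketch below is illustrative; the prompt is made up and the account is assumed to have access to the DALL-E 3 model.

```python
# Minimal sketch: generating an image with DALL-E 3 via the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic rendering of a solar-powered delivery drone over a city at dusk",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # temporary URL of the generated image
```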
With these improvements, OpenAI continues to rapidly iterate its developer platform. And pricing drops put the technology in reach of more companies looking to integrate next-gen AI.
Major business implications The new models and developer tools could have far-reaching implications for businesses across industries. By making AI more affordable and easier to implement, OpenAI could potentially disrupt the AI market and change the way businesses leverage AI.
New, more capable models like GPT-4 Turbo could enable businesses to create more sophisticated AI applications and services. Meanwhile, the Assistants API and multimodal capabilities could allow businesses to create more engaging and intuitive user experiences.
However, these new offerings could also present challenges for businesses , particularly in terms of data privacy and security. Businesses will need to ensure they have robust data governance policies in place to protect user data and comply with data protection regulations.
Overall, OpenAI’s new offerings represent an exciting step forward for the AI industry, and it will be interesting to see how businesses leverage these tools to innovate and create value in the coming years.
"
|
3,032 | 2,023 |
"MosaicML launches MPT-7B-8K, a 7B-parameter open-source LLM | VentureBeat"
|
"https://venturebeat.com/ai/mosaicml-launches-mpt-7b-8k-a-7b-parameter-open-source-llm"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MosaicML launches MPT-7B-8K, a 7B-parameter open-source LLM with 8k context length Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
MosaicML has unveiled MPT-7B-8K , an open-source large language model (LLM) with 7 billion parameters and an 8k context length.
According to the company, the model is trained on the MosaicML platform and underwent a pretraining process commencing from the MPT-7B checkpoint. The pretraining phase was conducted using Nvidia H100s , with an additional three days of training on 256 H100s, incorporating an impressive 500 billion tokens of data.
Previously, MosaicML had made waves in the AI community with its release of MPT-30B , an open-source and commercially licensed decoder-based LLM. The company claimed it to be more powerful than GPT-3-175B, with only 17% of GPT-3’s parameters, equivalent to 30 billion.
MPT-30B surpassed GPT-3’s performance across various tasks and proved more efficient to train than models of similar sizes. For instance, LLaMA-30B required approximately 1.44 times more FLOPs budget than MPT-30B, while Falcon-40B had a 1.27 times higher FLOPs budget than MPT-30B.
MosaicML claims that the new model MPT-7B-8K exhibits exceptional proficiency in document summarization and question-answering tasks compared to all previously released models.
The company said the model is specifically optimized for accelerated training and inference for quicker results. Moreover, it allows fine-tuning of domain-specific data within the MosaicML platform.
The company has also announced the availability of commercial-use licensing for MPT-7B-8k, highlighting its exceptional training on an extensive dataset comprising 1.5 trillion tokens, surpassing similar models like XGen, LLaMA, Pythia, OpenLLaMA and StableLM.
MosaicML claims that through the use of FlashAttention and FasterTransformer, the model excels in rapid training and inference while benefiting from the open-source training code available through the llm-foundry repository.
The company has released the model in three variations: MPT-7B-8k-Base: This decoder-style transformer is pretrained based on MPT-7B and further optimized with an extended sequence length of 8k. It undergoes additional training with 500 billion tokens, resulting in a substantial corpus of 1.5 trillion tokens encompassing text and code.
MPT-7B-8k-Instruct: This model is designed for long-form instruction tasks, including summarization and question-answering. It is crafted by fine-tuning MPT-7B-8k using carefully curated datasets.
MPT-7B-8k-Chat: This variant functions as a chatbot-like model, focusing on dialogue generation. It is created by finetuning MPT-7B-8k with approximately 1.5 billion tokens of chat data.
Mosaic asserts that MPT-7B-8k models exhibit comparable or superior performance to other currently available open-source models with an 8k context length, as confirmed by the company’s in-context learning evaluation harness.
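Since the weights are openly licensed, the models can be pulled straight from Hugging Face. The snippet below is a minimal sketch using the transformers library; the repository id "mosaicml/mpt-7b-8k-instruct" is assumed from MosaicML's naming convention, and because MPT ships custom modeling code it requires trust_remote_code=True (plus the accelerate package for device_map="auto").

```python
# Minimal sketch: loading MPT-7B-8k-Instruct with Hugging Face transformers.
# Assumes a GPU with enough memory for a 7B model in bfloat16 and the accelerate package installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-7b-8k-instruct"   # repo id assumed from MosaicML's naming convention
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,    # MPT uses a custom model class published with the weights
    device_map="auto",
)

prompt = "Summarize the following meeting notes in three sentences:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```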
The announcement coincides with Meta’s unveiling of the LLaMA 2 model , now available on Microsoft Azure. Unlike LLaMA 1, LLaMA 2 offers various model sizes, boasting 7, 13 and 70 billion parameters.
Meta asserts that these pre-trained models were trained on a vast dataset of two trillion tokens, 40% larger than that of LLaMA 1, and with double the context length of LLaMA 1. LLaMA 2 outperforms its predecessor according to Meta’s benchmarks.
"
|
3,033 | 2,023 |
"LLaMA 2: How to access and use Meta's versatile open-source chatbot right now | VentureBeat"
|
"https://venturebeat.com/ai/llama-2-how-to-access-and-use-metas-versatile-open-source-chatbot-right-now"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LLaMA 2: How to access and use Meta’s versatile open-source chatbot right now Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Facebook parent company Meta made waves in the artificial intelligence (AI) industry this week with the launch of LLaMA 2 , an open-source large language model (LLM) meant to challenge the restrictive practices by big tech competitors.
Unlike AI systems launched by Google, OpenAI and others that are closely guarded in proprietary models, Meta is freely releasing the code and data behind LLaMA 2 to enable researchers worldwide to build upon and improve the technology.
Meta’s CEO Mark Zuckerberg has been vocal about the importance of open-source software for stimulating innovation.
“Open-source drives innovation because it enables many more developers to build with new technology,” Zuckerberg said in a Facebook post.
“It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! LLaMA 2’s open-source nature could very well lead to rapid advancements in AI, as developers worldwide can now access, analyze and build upon the foundation model. It’s a bold move that could democratize the rapidly advancing field of AI, providing developers with powerful tools to build innovative applications and solutions.
LLaMA 2 is an open challenge to OpenAI’s ChatGPT and Google’s Bard LLaMA 2 comes in three sizes: 7 billion, 13 billion and 70 billion parameters depending on the model you choose. In comparison, OpenAI’s GPT-3.5 series has up to 175 billion parameters, and Google’s Bard (based on LaMDA) has 137 billion parameters. OpenAI famously did not disclose the number of parameters in GPT-4 in its published research. The number of parameters in a model generally correlates with its performance and accuracy, but larger models require more computational resources and data to train.
The training method used for LLaMA 2 is also noteworthy and different from popular alternatives. The tool is trained using reinforcement learning from human feedback (RLHF), learning from the preferences and ratings of human AI trainers. In contrast, ChatGPT used supervised fine-tuning, learning from labeled data provided by human annotators.
How to Access and Use LLaMA 2 Given its open-source nature, there are numerous ways to interact with LLaMA 2. Here are just a few of the easiest ways to access and begin experimenting with LLaMA 2 right now: 1. Interact with the Chatbot Demo The easiest way to use LLaMA 2 is to visit llama2.ai , a chatbot model demo hosted by Andreessen Horowitz. You can ask the model questions on any topic you are interested in, or request creative content by using specific prompts. For example, you can ask “Who is the president of France?” or “Write a poem about love.” You can also change the chat mode between balanced, creative and precise to suit your preferences. This is the best way to get started and to begin stress-testing the new model.
2. Download the LLaMA 2 Code If you want to run LLaMA 2 on your own machine or modify the code, you can download it directly from Hugging Face , a leading platform for sharing AI models. You will need a Hugging Face account and the necessary libraries and dependencies to run the code. You can find the installation instructions and documentation on the LLaMA 2 repository.
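If you go the self-hosted route, a common starting point is the Hugging Face transformers library. The sketch below assumes you have accepted Meta's license for the gated "meta-llama/Llama-2-7b-chat-hf" repository and authenticated with Hugging Face (for example via huggingface-cli login), and that you have a GPU with enough memory to hold the 7B model in half precision.

```python
# Minimal sketch: running the LLaMA 2 7B chat variant locally with Hugging Face transformers.
# Requires prior acceptance of Meta's license and Hugging Face authentication.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",    # spreads layers across available GPU(s)/CPU; needs accelerate
)

prompt = "Who is the president of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```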
3. Access through Microsoft Azure Another option to access LLaMA 2 is through Microsoft Azure , a cloud computing service that offers various AI solutions. You can find LLaMA 2 on the Azure AI model catalog , where you can browse, deploy and manage AI models. You will need an Azure account and subscription to use this service. This method is recommended for more advanced users.
4. Access through Amazon SageMaker JumpStart You can also experiment with and deploy LLaMA 2 via Amazon SageMaker JumpStart , a popular hub for algorithms, models and solutions. SageMaker JumpStart simplifies the process of building, training and deploying machine learning (ML) models with just a few clicks. You will need an Amazon Web Services account and subscription to use this service. This is another method that is recommended for advanced users and programmers.
5. Try a variant at llama.perplexity.ai Perplexity.ai is an AI-powered answer engine that uses ML to generate answers to your queries, then offers a series of website links. Llama.perplexity.ai combines the power of LLaMA 2 and Perplexity.ai, using the new model to provide general answers and relevant links to your queries. To use it, visit llama.perplexity.ai and type a query in the search box. You will see a short answer from LLaMA 2 followed by a list of links that you can explore further.
Shaping the future of large language models By launching LLaMA 2, Meta has taken a significant step in opening AI up to developers worldwide. As developers begin to customize and build upon this new model, we can expect to see a surge of innovative AI applications in the near future.
In the context of enterprise data, LLaMA 2 could unlock significant potential for businesses and organizations to develop custom AI solutions tailored to their specific needs. These could range from advanced chatbots to sophisticated data analysis tools, making LLaMA 2 a powerful tool in the enterprise AI toolbox.
Meta’s LLaMA 2 is not just an AI model, it’s a seismic shift in the AI landscape that could spark a new wave of innovation. As we begin using and experimenting with this powerful tool, we are reminded that in the world of AI, the only constant is change — and change has never looked so promising. Good luck experimenting!
"
|
3,034 | 2,023 |
"Generative AI: A new Gold Rush for software engineering innovation | VentureBeat"
|
"https://venturebeat.com/ai/generative-ai-a-new-gold-rush-for-software-engineering-innovation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Generative AI: A new Gold Rush for software engineering innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
E=mc^2 is Einstein’s simple equation that changed the course of humanity by enabling both nuclear power and nuclear weapons. The generative AI boom has some similarities. It is not just the iPhone or the browser moment of our times; it’s much more than that.
For all the benefits that generative AI promises, voices are getting louder about the unintended societal effects of this technology. Some wonder if creative jobs will be the most in-demand over the next decade as software engineering becomes a commodity. Others worry about job losses, which may necessitate reskilling in some cases. It is the first time in history that white-collar jobs stand to be automated at scale, potentially rendering expensive degrees and years of experience meaningless.
But should governments hit the brakes by imposing regulations, or should we instead continue to improve this technology, which is going to completely change how we think about work? Let’s explore: Generative AI: The new California Gold Rush The technological breakthrough that was expected in a decade or two is already here. Probably not even the creators of ChatGPT expected their creation to be this wildly successful so quickly.
The key difference here compared to some technology trends of the last decade is that the use cases here are real and enterprises have budgets already allocated. This is not a cool technology solution that is looking for a problem. This feels like the beginning of a new technological supercycle that will last decades or even longer.
For the longest time, data has been referred to as the new oil. With a large volume of exclusive data, enterprises can build competitive moats. To do this, the techniques to extract meaningful insights from large datasets have evolved over the last few decades from descriptive (e.g., “Tell me what happened”) to predictive and prescriptive (e.g., “What should I do to improve topline revenue?”).
Until now, whether you used SQL-based analysis, spreadsheets or R/Stata software to complete this analysis, you were limited in terms of what was possible. But with generative AI, this data can be used to create entirely new reports, tables, code, images and videos, all in a matter of seconds. It is so powerful that it has taken the world by storm.
What’s the secret sauce? At the basic level, let’s look at the simple equation of a straight line: y=mx+c.
This is a simple 2D representation where m represents the slope of the line and c represents the constant term, the point where the line intersects the y-axis. In the most fundamental terms, m and c represent the weights and biases, respectively, of an AI model.
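To make that concrete, here is a tiny, purely illustrative snippet showing a single artificial "neuron" computing an output from an input using one weight (m) and one bias (c):

```python
# A single "neuron" is essentially y = m*x + c, usually followed by an activation function.
def neuron(x, weight, bias):
    return max(0.0, weight * x + bias)  # ReLU activation: keep only positive outputs

print(neuron(x=2.0, weight=0.5, bias=1.0))  # 0.5 * 2.0 + 1.0 = 2.0
```

Real models stack millions or billions of these weights and biases; training is the process of nudging them until the outputs become useful.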
Now let’s slowly expand this simple equation and think about how the human brain has neurons and synapses that work together to retrieve knowledge and make decisions. Representing the human brain would require a multi-dimensional space in which vast amounts of knowledge are encoded as vectors and stored for quick retrieval.
Imagine turning text management into a math problem: Vector embeddings What if every piece of data (image, text, blog, etc.) could be represented by numbers? It is possible. Any such data can be represented by something called a vector, which is just a collection of numbers. When you turn words, sentences and paragraphs into vectors in a way that also captures the relationships between different words, you get something called an embedding.
Once you’ve done that, you can basically turn search and classification into a math problem.
In such a multi-dimensional space, when we represent text as mathematical vectors, we get a clustering in which words that are similar in meaning sit in the same cluster. For example, in the TensorFlow embedding projector, words that are closest to the word “database” are clustered in the same region, which makes responding to a query that includes that word very easy. Embeddings can be used to create text classifiers and to power semantic search.
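As a minimal sketch of how semantic search sits on top of embeddings (the three-dimensional vectors below are made up for illustration; real embeddings typically have hundreds or thousands of dimensions produced by an embedding model):

```python
# Toy semantic search: rank documents by cosine similarity to a query embedding.
# The vectors are invented for illustration; an embedding model would produce real ones.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_embeddings = {
    "database":  np.array([0.9, 0.1, 0.0]),
    "sql":       np.array([0.8, 0.2, 0.1]),
    "astronaut": np.array([0.0, 0.1, 0.9]),
}
query = np.array([0.85, 0.15, 0.05])  # pretend this is the embedding of "data storage"

ranked = sorted(doc_embeddings, key=lambda doc: cosine_similarity(query, doc_embeddings[doc]), reverse=True)
print(ranked[0])  # -> "database", the semantically closest entry
```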
Once you have a trained model, you can ask it to generate “the image of a cat flying through space in an astronaut suit” and it will produce that image in seconds. For this magic to work, large clusters of GPUs and CPUs run nonstop for weeks or months, processing datasets the size of all of Wikipedia or much of the public internet; each time new data is processed, the weights and biases of the model change a little bit. Such trained models, whether large or small, are already making employees more productive and sometimes eliminating the need to hire more people.
Competitive advantages Do you/did you watch Ted Lasso ? Single-handedly, the show has driven new customers to AppleTV. It illustrates that to win the competitive wars in the digital streaming business, you don’t need to produce 100 average shows; you need just one that is incredible. In the world of generative AI, this happened with OpenAI, which had nothing to lose as it kept iterating and launching innovative products like GPT-1/2/3 and DALL·E. Others with deeper pockets were probably more cautious and are now playing a catchup game. Microsoft CEO Satya Nadella famously asked about generative AI, “OpenAI built this with 250 people; why do we have Microsoft Research at all?” Once you have a trained model to which you can feed quality data, it builds a flywheel leading to a competitive advantage. More users get driven to the product, and as they use the product, they share data in the text prompts, which can be used to improve the model.
Once the flywheel above of data -> training -> fine-tuning -> training starts, it can act as a sustainable competitive differentiator for businesses. Over the last few years, there has been a maniacal focus from vendors, both small and large, on building ever-larger models for better performance. Why would you stop at a ten-billion-parameter model when you can train a massive general-purpose model with 500 billion parameters that can answer questions about any topic from any industry? There has been a realization recently that we might have hit the limit of productivity gains that can be achieved by the size of a model. For domain-specific use cases, you might be better off with a smaller model that is trained on highly specific data. An example of this would be BloombergGPT, a private model trained on financial data that only Bloomberg can access. It is a 50 billion-parameter language model that is trained on a huge dataset of financial articles, news, and other textual data they hold and can collect.
Independent evaluations of models have shown that there is no silver bullet; the best model for an enterprise will be use-case specific. It may be large or small; it may be open-source or closed-source. In the comprehensive evaluation completed by Stanford using models from OpenAI, Cohere, Anthropic and others, it was found that smaller models may perform better than their larger counterparts. This affects the choices a company can make when starting to use generative AI, and there are multiple factors that decision-makers have to take into account: Complexity of operationalizing foundation models: Training a model is a process that is never “done.” It is a continuous process where a model’s weights and biases are updated each time a model goes through a process called fine-tuning.
Training and inference costs : There are several options available today which can each vary in cost based on the fine-tuning required: Train your own model from scratch. This is quite expensive as training a large language model (LLM) could cost as much as $10 million.
Use a public model from a large vendor. Here the API usage costs can add up rather quickly.
Fine-tune a smaller proprietary or open-source model. This has the cost of continuously updating the model.
In addition to training costs, it is important to realize that each time the model’s API is called, it increases the costs. For something simple like sending an email blast, if each email is customized using a model, it can increase the cost up to 10 times, thus negatively affecting the business’s gross margins.
Confidence in wrong information : Someone with the confidence of an LLM has the potential to go far in life with little effort! Since these outputs are probabilistic and not deterministic, once a question is asked, the model may make up an answer and appear very confident. This is called hallucination , and it is a major barrier to the adoption of LLMs in the enterprise.
Teams and skills: In talking to numerous data and AI leaders over the last few years, it became clear that team restructuring is required to manage the massive volume of data that companies deal with today. While use case-dependent to a large degree, the most efficient structure seems to be a central team that manages data and feeds both analytics and ML workstreams. This structure works well not just for predictive AI but for generative AI as well.
Security and data privacy: It is so easy for employees to share critical pieces of code or proprietary information with an LLM, and once shared, the data can and will be used by the vendors to update their models. This means that the data can leave the secure walls of an enterprise, and this is a problem because, in addition to a company’s secrets, this data might include PII/PHI data, which can invite regulatory action.
Predictive AI vs. generative AI considerations: Teams have traditionally struggled to operationalize machine learning. A Gartner estimate was that only 50% of predictive models make it to production use cases after experimentation by data scientists. Generative AI, however, offers many advantages over predictive AI depending on use cases. The time-to-value is incredibly low. Without training or fine-tuning, several functions within different verticals can get value. Today you can generate code (including backend and frontend) for a basic web application in seconds. This used to take at least days or several hours for expert developers.
Future opportunities If you rewound to the year 2008, you would hear a lot of skepticism about the cloud. Would it ever make sense to move your apps and data from private or public data centers to the cloud, thereby losing fine-grained control? But the development of multi-cloud and DevOps technologies made it possible for enterprises to not only feel comfortable but accelerate their move to the cloud.
Generative AI today might be comparable to the cloud in 2008. That means a lot of innovative large companies are still to be founded. For founders, this is an enormous opportunity to create impactful products, as the entire stack is currently getting built. Here are some problems that still need to be solved: Security for AI: Solving the problems of bad actors manipulating a model’s weights, or slipping a backdoor into each piece of code that is generated. These attacks are so sophisticated that they are easy to miss, even when experts specifically look for them.
LLMOps : Integrating generative AI into daily workflows is still a complex challenge for organizations large and small. There is complexity regardless of whether you are chaining together open-source or proprietary LLMs. Then the question of orchestration, experimentation, observability and continuous integration also becomes important when things break. There will be a class of LLMOps tools needed to solve these emerging pain points.
AI agents and copilots for everything: An agent is basically your personal chef, EA and website builder all in one. Think of it as an orchestration layer that adds a layer of intelligence on top of LLMs. These systems can let AI out of its box. For a specified goal like: “create a website with a set of resources organized under legal, go-to-market, design templates and hiring that any founder would benefit from,” the agents would break it down into achievable tasks and then coordinate to achieve the objective.
Compliance and AI guardrails: Regulation is coming. It is just a matter of time before lawmakers around the world draft meaningful guardrails around this disruptive new technology. From training to inference to prompting, there will need to be new ways to safeguard sensitive information when using generative AI.
LLMs are already so good that software developers can generate 60-70% of code automatically using coding copilots. This number is only going to increase in the future. One thing to keep in mind though is that these models can only produce something that’s a derivative of what has already been done. AI can never replace the creativity and beauty of a human brain, which can think of ideas never thought before. So, the code poets who know how to build amazing technology over the weekend will find AI a pleasure to work with and in no way a threat to their careers.
Final thoughts Generative AI for the enterprise is a phenomenal opportunity for visionary founders to build the FAANG companies of tomorrow. This is still the first innings that is being played out. Large enterprises, SMBs and startups are all figuring out how to benefit from this innovative new technology. Like the California gold rush, it might be possible to build successful companies by selling picks and shovels if the perceived barrier to entry is too high.
Ashish Kakran is a principal at Thomvest Ventures.
"
|
3,035 | 2,022 |
"AI chatbots offer a way to connect with and engage customers | VentureBeat"
|
"https://venturebeat.com/ai/ai-chatbots-offer-a-way-to-connect-with-and-engage-customers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community AI chatbots offer a way to connect with and engage customers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As companies grow, so does the amount of communication required between them and their customer base. Historically, the solution has been to add more live representatives to handle incoming questions and concerns. But additional human resources also mean added business costs. Technology presents us with an elegant solution: chatbots.
This solution might elicit a collective sigh from consumers who are used to outdated chatbots with limited functionality, stilted speech and a cold, impersonal affect. Yes, chatbots can lead to frustration, but they don’t have to! The case for using chatbots is clear. They can chat with multiple users simultaneously, providing needed information within seconds. As your first level of customer engagement and support, they provide an opportunity for responsiveness and efficient problem-solving. The key is using an advanced chatbot that acts more like a warm and capable digital ally: lively, engaging and able to offer valuable answers and solutions in real time. Fortunately, the technology exists to make this a reality.
Chatbots as digital allies As online communication continues to grow, businesses must adapt how they interact with customers. Research from McKinsey found that 25% of the roughly 2,400 business leaders surveyed said they increased artificial intelligence (AI) adoption due to the pandemic.
Breaking down communication barriers between customers and businesses with intelligent digital assistants can not only help companies grow, but can also give end-users more control over how they communicate with those entities. After all, text-based communication is an efficient way of bringing businesses and customers closer together.
Natural expression is key to customer satisfaction Chatbot interactions should be empowering, allowing customers to express themselves to businesses online in a way that feels simple, convenient and natural. They should make it easy to resolve queries and problems in ways that leave customers feeling fully satisfied and heard. In fact, more than half of the respondents in a recent study by Forrester identified more satisfied customers as a top benefit of personalized customer service interactions, such as chatbots.
It’s important to care not just about conducting business efficiently but also about helping others. Good customer relations matter. While chatbots empower companies to deliver an improved customer experience and good customer service, they don’t replace a human. Instead, they are here to help us in a way that feels natural, with rich and interactive messages. The results are undeniable: A Gartner report found that companies that use chatbots in their sales strategy can achieve up to 30% higher conversion rates.
AI chatbots: Improving customer experience As stated previously, companies can see up to 30% higher conversion rates by using chatbots, but how do chatbots actually find solutions for customers? First, some chatbot basics. There are two kinds of chatbots: rule-based and AI chatbots.
Rule-based chatbots work on an if/then basis, meaning they are a bit like actors, delivering preprogrammed responses. So let’s say a customer asks to reset their password. The chatbot will pick up keywords like “reset” and “password,” delivering a response such as “Sure, I can help you with your password reset,” and then it will provide the needed instructions. While such chatbots can be programmed with personality, at the end of the day, they can easily fail to match questions to answers if, for instance, spellings are off or questions are asked in ways that avoid using established keywords. They are also unable to learn from past experiences and can’t pick up on context.
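For illustration, the core of a rule-based bot boils down to keyword lookups like this simplified sketch, which also shows why such bots break when users phrase things differently:

```python
# Simplified rule-based chatbot: match keywords in the message to canned responses.
import re

RULES = {
    ("reset", "password"): "Sure, I can help you with your password reset. First, open Settings...",
    ("refund",): "I can help with refunds. Could you share your order number?",
}

def rule_based_reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, response in RULES.items():
        if set(keywords) <= words:  # every keyword must appear in the message
            return response
    return "Sorry, I didn't understand that. Let me connect you with an agent."

print(rule_based_reply("How do I reset my password?"))                # matches the rule
print(rule_based_reply("I can't log in, my pasword won't work"))      # typo, falls through
```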
AI chatbots, on the other hand, leverage newer technology that makes them better conversationalists. That includes: Natural language processing, which helps chatbots understand how humans communicate, replicate natural manners of speech and grasp the context of conversations, so they can deliver the correct response even if spelling mistakes and jargon are present.
Machine learning , which allows chatbots to identify patterns in user input, making the best decisions based on past interactions with users.
Sentiment analysis, which helps chatbots understand how end-users are feeling.
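As a quick illustration of the sentiment piece, the open-source transformers library ships a ready-made pipeline; the default model it downloads can change between releases, so treat the output as indicative:

```python
# Minimal sentiment analysis with the Hugging Face transformers pipeline.
# The first call downloads a default pre-trained model; scores are illustrative.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I've been waiting 40 minutes and still have no answer!")[0]
print(result["label"], round(result["score"], 3))  # e.g. NEGATIVE 0.998
```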
Like their rule-based counterparts, AI chatbots need to be trained with predefined responses, but that’s sort of like giving a growing child a basic vocabulary on which to build as they learn. As AI chatbots gain more practice, they can deliver a higher degree of sophistication.
Integrating chatbots with your other tech Moving past strictly conversational tech, let’s talk about another important functionality. Good chatbot products should integrate with popular messaging channels and tools, such as Facebook and WhatsApp. In other words, it’s important to meet customers where they are.
A versatile chatbot should not only integrate with popular messaging channels, but also with commonly used platforms, such as Shopify and Google Analytics.
Because even the best AI chatbots will have limitations, they should be able to perform warm transfers to live agents when it’s necessary, and create and pass on help tickets when they don’t have the answers, very much like a live agent might escalate a question they can’t answer. In addition, once an end-user’s question or issue has been resolved, an AI chatbot should wrap up the conversation in a natural way.
AI chatbots also present an opportunity for training live representatives. For example, chat transcripts allow live representatives to see where interactions go well, or where they could go better, allowing them the ability to engage in follow-up. These interactions allow agents to be better prepared for future situations, cutting down on time spent trying to find a solution.
In the previously mentioned Forrester study, out of 100 customer service decision-makers surveyed, 62% shared that more productive agents were a top benefit of implementing systems similar to AI chatbots.
One traditional entry barrier to using AI chatbots was the long development time because chatbots must be customized to unique company needs. But today, chatbot providers are making it easier to deploy chatbots with ready-to-use templates that can be customized to fit individual business needs, bringing chatbots online quickly using visual builders, with little or no coding required. And because chatbots let you integrate expertise from your entire team, you can be sure to deliver the answers end-users are looking for.
The business case for using an AI chatbot is undeniable. Today’s technology allows these digital assistants the ability to help companies connect to their consumer base and help end-users like never before. While deployment can seem daunting to small companies without tech teams, the barriers to entry are lower than ever. As the usage of AI chatbots grows across industries, AI chatbot platforms that use machine learning and have the most input data from which to learn will lead the pack.
Dariusz Zabrzeński is head of ChatBot.
"
|
3,036 | 2,023 |
"5 ways enterprise leaders can use large language models to unlock new possibilities | VentureBeat"
|
"https://venturebeat.com/ai/5-ways-enterprise-leaders-can-use-large-language-models-to-unlock-new-possibilities"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 5 ways enterprise leaders can use large language models to unlock new possibilities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s highly unlikely that you’ve missed the buzz surrounding generative AI, and specifically large language models (LLMs) like ChatGPT. In recent months, these have been hot topics everywhere, from social media to the news to everyday conversations, and we’ve only just begun to learn what generative AI could be capable of.
Generally speaking, gen AI refers to a category of machine learning (ML) techniques that can create content like images, music and text that closely resembles human-created content. LLMs, on the other hand, are neural networks with billions of parameters that have been trained on vast amounts of text data, which enables them to understand, process, and generate human-like language.
Together, these technologies offer a diverse range of applications that hold the potential to reshape diverse industries and amplify the quality of interactions between humans and machines. By exploring these applications, business owners and enterprise decision-makers can gain valuable inspiration, drive accelerated growth and achieve tangibly improved results through rapid prototyping. The added advantage of gen AI is that most of these applications require minimal expertise and do not require further model training.
Quick disclaimer: People often tend to associate gen AI exclusively with ChatGPT, but there are numerous models from other providers available, like Google’s T5, Meta’s Llama, TII’s Falcon, and Anthropic’s Claude. While most of the discussed applications in this article have made use of OpenAI’s ChatGPT , you can readily adapt and switch the underlying LLM to align with your specific compute budget, latency (how fast you need your model to generate completions — smaller models allow quicker loading and reduce inference latency), and downstream task.
1. Connect LLMs to external data LLMs demonstrate impressive capabilities at many tasks right out of the box, such as translation and summarization, without requiring initial customization. The reason they are so good at these generic tasks is that the underlying foundation model has been trained on large yet generic datasets. However, this competence might not seamlessly extend to domain-specific tasks including, for example, providing answers about your company’s annual report. This is where Retrieval Augmented Generation (RAG) comes into the picture.
RAG is a framework for building LLM-powered systems that make use of external data sources. RAG gives an LLM access to data it would not have seen during pre-training, but that is necessary to correctly provide relevant and accurate responses. RAG enables language models like ChatGPT to provide better answers to domain-specific questions by combining their natural language processing (NLP) abilities with external knowledge, mitigating instances of generating inaccurate information or “hallucinations.” It does so by: Retrieving relevant information from external knowledge sources, such as large-scale document collections, databases or the internet. The relevance is based on the semantic similarity (measured using, say, cosine similarity) to the user’s question.
Augmenting the retrieved information to the original question in the prompt (to provide a helpful context for answering the question) and passing it to the LLM so it can produce a more informed, contextually relevant, and accurate response.
This approach makes LLMs more versatile and useful across various domains and applications, including question-answering, content creation and interactive conversation with access to real-time data. Podurama, a podcast app, has leveraged similar techniques to build its AI-powered recommender chatbots. These bots adeptly suggest relevant shows based on user queries, drawing insights from podcast transcripts to refine their recommendations.
This approach is also valuable in crisis management.
PagerDuty , a SaaS incident response platform, uses LLMs to generate summaries of incidents using basic data such as title, severity or other factors, and augmenting it with internal Slack data , where responders discuss details and share troubleshooting updates to refine the quality of the summaries.
While RAG may appear intricate, the LangChain library offers developers the necessary tools to implement RAG and build sophisticated question-answering systems. (In many cases, you only need a single line of code to get started). LangChain is a powerful library that can augment and enhance the performance of the LLM at runtime by providing access to external data sources or connecting to existing APIs of other applications.
When combined with open-source LLMs (such as Llama 2 or BLOOM), RAG emerges as an exceptionally potent architecture for handling confidential documents. What’s particularly interesting is that LangChain boasts over 120 integrations (at the time of writing), enabling seamless functionality with structured data (SQL), unstructured content (PDFs), code snippets and even YouTube videos.
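To make the retrieve-then-augment flow concrete, here is a deliberately minimal, framework-free sketch. The embed() and complete() callables are hypothetical stand-ins for whichever embedding model and LLM you use; they are not real library calls:

```python
# Minimal RAG sketch: retrieve the most relevant document, then augment the prompt.
# `embed` and `complete` are placeholder callables for your embedding model and LLM.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_answer(question, documents, embed, complete):
    q_vec = embed(question)
    # 1. Retrieve: pick the document most semantically similar to the question.
    best_doc = max(documents, key=lambda doc: cosine(q_vec, embed(doc)))
    # 2. Augment: place the retrieved context into the prompt.
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {best_doc}\n"
        f"Question: {question}"
    )
    # 3. Generate: the LLM now answers grounded in the retrieved context.
    return complete(prompt)
```

Libraries such as LangChain wrap these same steps (plus chunking, vector stores and prompt templates) behind higher-level abstractions.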
2. Connect LLMs to external applications Much like utilizing external data sources, LLMs can establish connections with external applications tailored to specific tasks. This is particularly valuable when a model occasionally produces inaccuracies due to outdated information. For example, when questioning the present Prime Minister of the UK, ChatGPT might continue to refer to Boris Johnson, even though he left office in late 2022. This limitation arises because the model’s knowledge is fixed at its pretraining period and doesn’t encompass post-training events like Rishi Sunak’s appointment.
To address such challenges, LLMs can be enhanced by integrating them with the external world through agents. These agents serve to mitigate the absence of internet access inherent in LLMs, allowing them to engage with tools like a weather API (for real-time weather data) or SerpAPI (for web searches). A notable example is Expedia’s chatbot, which guides users in discovering and reserving hotels, responding to queries about accommodations, and delivering personalized travel suggestions.
Another captivating application involves the automatic labeling of tweets in real-time with specific attributes such as sentiment, aggression and language. From a marketing and advertising perspective, an agent connecting to e-commerce tools can help the LLM recommend products or packages based on user interests and content.
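As a bare-bones illustration of the agent idea (with a hard-coded stand-in where a real agent would let the LLM choose the tool), the pattern looks roughly like this:

```python
# Toy "agent" step: decide which tool to call, run it, and return the observation.
# Real agent frameworks add LLM-driven planning, memory, retries and multi-step loops.
def get_weather(city: str) -> str:
    return f"22C and sunny in {city}"  # placeholder for a real weather API call

def web_search(query: str) -> str:
    return f"Top search result for '{query}'"  # placeholder for a real search API call

TOOLS = {"weather": get_weather, "search": web_search}

def choose_tool(question: str):
    # Stand-in for an LLM deciding which tool fits the question.
    if "weather" in question.lower():
        return "weather", "Paris"
    return "search", question

tool_name, tool_arg = choose_tool("What's the weather in Paris right now?")
print(TOOLS[tool_name](tool_arg))  # -> "22C and sunny in Paris"
```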
3. Chaining LLMs LLMs are commonly used in isolation for most applications. However, recently LLM chaining has gained traction for complex applications. It involves linking multiple LLMs in sequence to perform more complex tasks. Each LLM specializes in a specific aspect, and they collaborate to generate comprehensive and refined outputs.
This approach has been applied in language translation, where LLMs are used successively to convert text from one language to another. Companies like Microsoft have proposed LLM chaining for translation services in the case of low-resource languages, enabling more accurate and context-aware translations of rare words.
This approach can offer several valuable use cases in other domains as well. For consumer-facing companies, LLM chaining can create a dynamic customer support experience that can enhance customer interactions, service quality, and operational efficiency.
For instance, the first LLM can triage customer inquiries and categorize them, passing them on to specialized LLMs for more accurate responses. In manufacturing, LLM chaining can be employed to optimize the end-to-end supply chain processes by chaining specialized LLMs for demand forecasting, inventory management, supplier selection and risk assessment.
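A minimal sketch of that triage-then-specialist pattern, assuming a generic call_llm(prompt) helper rather than any particular vendor's API:

```python
# Chaining two LLM calls: the first triages the inquiry, the second answers it.
# `call_llm` is a hypothetical helper wrapping whichever model or provider you use.
def route_and_answer(inquiry: str, call_llm) -> str:
    category = call_llm(
        "Classify this customer inquiry as BILLING, TECHNICAL or OTHER. "
        f"Reply with one word only.\nInquiry: {inquiry}"
    ).strip().upper()

    specialist_prompts = {
        "BILLING": "You are a billing specialist. Resolve this inquiry: ",
        "TECHNICAL": "You are a technical support engineer. Resolve this inquiry: ",
    }
    prompt = specialist_prompts.get(category, "Answer this inquiry helpfully: ") + inquiry
    return call_llm(prompt)
```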
4. Extracting entities using LLMs Prior to the emergence of LLMs , entity extraction relied on labor-intensive ML approaches involving data collection, labeling and complex model training. This process was cumbersome and resource-demanding. However, with LLMs, the paradigm has shifted. Now, entity extraction is simplified to a mere prompt, where users can effortlessly query the model to extract entities from text. More interestingly, when extracting entities from unstructured text like PDFs, you can even define a schema and attributes of interest within the prompt.
Potential examples include financial institutions which can utilize LLMs to extract crucial financial entities like company names, ticker symbols and financial figures from news articles, enabling timely and accurate market analysis. Similarly, it can be used by advertising/marketing agencies for managing their digital assets by employing LLM-driven entity extraction to categorize ad scripts, actors, locations and dates, facilitating efficient content indexing and asset reuse.
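For illustration, the prompt-based approach can be as simple as describing the schema you want and parsing the JSON the model returns; call_llm below is again a hypothetical helper, and production code would validate the output:

```python
# Prompt-driven entity extraction: define the schema in the prompt, parse the JSON reply.
# `call_llm` is a hypothetical helper for whichever LLM you use.
import json

SCHEMA = {"company": "string", "ticker": "string", "amount": "string", "date": "string"}

def extract_entities(text: str, call_llm) -> dict:
    prompt = (
        "Extract the entities defined in this schema from the text. "
        "Return valid JSON only, with null for anything not present.\n"
        f"Schema: {json.dumps(SCHEMA)}\n"
        f"Text: {text}"
    )
    return json.loads(call_llm(prompt))
```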
5. Enhancing transparency of LLMs with ReAct prompts While receiving direct responses from LLMs is undoubtedly valuable, the opaqueness of the black box approach often raises hesitations among users. Additionally, when confronted with an inaccurate response for a complex query, pinpointing the exact step of failure becomes challenging. A systematic breakdown of the process could greatly assist in the debugging process. This is precisely where the Reason and Act (ReAct) framework comes into play, offering a solution to these challenges.
ReAct emphasizes step-by-step reasoning to make the LLM generate solutions like a human would. The goal is to make the model think through tasks the way humans do and explain its reasoning in language. This approach is easy to operationalize, as generating ReAct prompts is a straightforward task involving human annotators expressing their thoughts in natural language alongside the corresponding actions they’ve executed. With only a handful of such instances, the model learns to generalize well to new tasks.
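A hedged sketch of what a ReAct-style prompt skeleton can look like; the exact wording varies across implementations and papers:

```python
# Skeleton of a ReAct-style prompt: the model alternates Thought / Action / Observation
# steps before committing to a final answer. The wording is illustrative, not canonical.
REACT_TEMPLATE = """Answer the question using the following format:
Thought: reason about what to do next
Action: the tool to use, e.g. search[query]
Observation: the result returned by the tool
... (repeat Thought/Action/Observation as needed)
Final Answer: the answer to the original question

Question: {question}
Thought:"""

print(REACT_TEMPLATE.format(question="In which year was the author of 'Dune' born?"))
```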
Taking inspiration from this framework, many ed-tech companies are piloting tools that offer learners personalized assistance with coursework and assignments, and offer instructors AI-powered lesson plans. To this end, Khan Academy developed Khanmigo, a chatbot designed to guide students through math problems and coding exercises. Instead of merely delivering answers upon request, Khanmigo encourages thoughtful problem-solving by walking students through the reasoning process. This approach not only helps prevent plagiarism but also empowers students to grasp concepts independently.
Conclusion While the debate may be ongoing about the potential for AI to replace humans in their roles or the eventual achievement of technological singularity (as predicted by the godfather of AI, Geoffrey Hinton), one thing remains certain: LLMs will undoubtedly play a pivotal role in expediting various tasks across a range of domains. They have the power to enhance efficiency, foster creativity and refine decision-making processes, all while simplifying complex tasks.
For professionals in various tech roles, such as data scientists, software developers and product owners, LLMs can offer valuable tools to streamline workflows, gather insights and unlock new possibilities.
Varshita Sher is a data scientist, a dedicated blogger and podcast curator , and leads the NLP and generative AI team at Haleon.
"
|
3,037 | 2,023 |
"How generative AI is defining the future of identity access management | VentureBeat"
|
"https://venturebeat.com/security/how-generative-ai-is-defining-the-future-of-identity-access-management"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How generative AI is defining the future of identity access management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI defines the future of identity access management ( IAM ) by improving outlier behavior analysis, increasing the accuracy of alerts and streamlining administrative tasks while guarding against new threats.
The majority (98%) of security professionals believe AI and machine learning (ML) will be beneficial in fighting identity-based breaches and view them as pivotal technologies for unifying their many identity frameworks. Well over half (63%) predict AI’s leading use case will be greater accuracy in identifying outlier behavior. Another 56% believe AI will help improve the accuracy of alerts, and 52% believe AI will help streamline administrative tasks.
The Identity Defined Security Alliance’s recent report, 2023 Trends in Securing Digital Identities, also shows how security professionals are challenged to get diverse identity frameworks from multiple vendors and different architectures to provide consistent data and insights.
Generative AI shrinks attack surfaces and expands the market Insider threats and zombie credentials are two of the most challenging attack surfaces when it comes to detecting and stopping an intrusion or breach attempt. Expect to see the leading IAM providers adopt gen AI to create auto-deployed decoys, deliver stepwise improvements to behavioral detection and response, make gains in Asset Graph technology and fast-track improvements to their extended detection and response (XDR) platforms.
Every IAM provider has gen AI on their roadmap and is moving quickly to deliver new products that capitalize on its ability to provide contextual intelligence. Leading IAM providers include AWS, CrowdStrike, Delinea, Ericom, ForgeRock, Ivanti, Google Cloud Identity, IBM Cloud Identity, Microsoft Azure Active Directory, Palo Alto Networks and Zscaler.
The more successful gen AI is in shrinking attack surfaces, the more its net effect will be to expand the market.
Gartner predicts the worldwide IAM market will increase from $16.1 billion in 2023 to $24.9 billion in 2027. Broader end-user spending for the worldwide information security and risk management market will grow to $186 billion in 2023, with a constant currency growth of 13.4%. The market will reach $289 billion in 2027, with a CAGR of 11.0% between 2022 to 2027.
Gen AI shows the potential to close gaps in cloud security, the fastest-growing information security and risk management market that Gartner tracks. Cloud security products and services are predicted to grow from $4.4 billion in 2022 to $12.8 billion in 2027, attaining a 23.5% compound annual growth rate (CAGR).
Application security is predicted to grow from $5.7 billion in revenue this year to $9.6 billion in 2027, attaining a 13.6% CAGR. Global spending on zero-trust security software and solutions will grow from $27.4 billion in 2022 to $60.7 billion by 2027 , attaining a CAGR of 17.3%.
Stepping up generative AI efforts in IAM IAM providers need to step up their efforts using gen AI to identify and defeat the increasing number of malware-free attacks, which are often combined with convincing social engineering tactics. Malware-free intrusions accounted for 71% of all detections as indexed by the CrowdStrike Threat Graph, and attackers are increasingly using gen AI to create, launch and monitor them.
The latest Falcon Overwatch Threat Hunting Report illustrates how attack strategies aim for identities first.
“A key finding from the report was that upwards of 60% of interactive intrusions observed by OverWatch involved the use of valid credentials, which continue to be abused by adversaries to facilitate initial access and lateral movement,” said Param Singh, VP for Falcon OverWatch at CrowdStrike.
“Identity is where security is going and will revolve around going forward because there’s just so much more rich data there,” Ariel Tseitlin, a partner at Scale Venture Partners , told VentureBeat earlier this year.
IAM jumped from eighth place to second in this year’s investment priorities ranking, reflecting increasing market concerns about identity security in multicloud tech stacks.
In a recent series of interviews, IAM providers and the CISOs they serve told VentureBeat that what they’re most interested in is seeing how gen AI can help close the gaps their organizations face in achieving identity-first security. IAM providers are trying to close the gaps between identity and endpoint security, relying on gen AI and training models to bridge that divide with more contextual intelligence.
Where IAM product leaders are focusing gen AI CISOs have consistently told VentureBeat that stopping an insider threat worries them and their teams the most. Employees with legitimate IDs — some with access credentials and a few with admin rights — are trusted and move freely through infrastructure to do their jobs.
Monitoring network activities and identities won’t catch a breach using stolen credentials or an insider attack. Additionally, attackers often know the networks they’re attacking better than the admins running them, and the threat becomes even more severe.
VentureBeat spoke with product leaders responsible for the next generation of IAM systems to get their thoughts on solving this, and here are their observations.
Auditing all access credentials in real time to verify access privileges by resource Dropbox, Box and Microsoft SharePoint deployments often hold years of intellectual property, customer records and transaction information that remain exposed because credentials have never been audited or revoked. Product leaders across IAM providers say they see this often in their customers’ networks, and it’s common for breaches to happen. No system catches them because legitimate credentials were used.
Nearly half ( 45% ) of enterprises suspect former employees and contractors still have active access to company systems and files, according to a recent study by Ivanti.
During an interview with VentureBeat, Srinivas Mukkamala, Ivanti CPO, said that “large organizations often fail to account for the huge ecosystem of apps, platforms and third-party services that grant access well past an employee’s termination.” Mukkamala continued: “A shockingly large number of security professionals — and even leadership-level executives — still have access to former employers’ systems and data.” Behavioral analysis for anomaly detection and response Every IAM provider has their anomaly detection solution currently available or in their second generation of improving it with gen AI. It’s a strong use case for the technology, as it can identify unusual access patterns or potential breaches by analyzing large datasets in real-time, significantly improving detection.
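As a simplified, generic illustration of what outlier detection on access behavior can look like (not any vendor's actual implementation), an unsupervised model such as scikit-learn's IsolationForest can flag unusual sessions:

```python
# Toy behavioral anomaly detection: flag unusual logins by hour of day and data volume.
# Purely illustrative; production IAM systems use far richer features and models.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB downloaded during the session]
normal_sessions = np.array([[9, 40], [10, 55], [11, 35], [14, 60], [16, 45]])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

new_sessions = np.array([[10, 50], [3, 900]])  # a 3 a.m. login pulling 900 MB
print(model.predict(new_sessions))  # 1 = looks normal, -1 = flagged as an outlier
```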
IAM product leaders say their roadmaps reflect broadening the use of gen AI-based behavioral analysis for fraud detection, endpoint security, server and data center monitoring and more. Leading providers include CrowdStrike, CyberArk, Ivanti, Microsoft, Thales , Ping Identity and others.
Identifying, isolating and stopping insider threats Every IAM provider that VentureBeat has had briefings with has an insider threat solution already available or on their roadmap. Their goal is to use gen AI to fast-track insider threat solutions to increase the accuracy and reliability of alerts while sending out decoy containers, shares and assets that an inside attacker would try to breach.
IAM product managers often visit their customers and spend a day in Security Operations Centers (SOC) to see how alert workflows can be improved, especially in insider threats.
According to one leading provider, it’s a very effective technique, and they’re productizing what they’ve learned. Given how high a priority this is for the IAM provider community, it’s reasonable to assume there will be acquisitions in this area in 2024. For instance, in 2022, CrowdStrike acquired Reposify to strengthen the external attack surface management capabilities of its Falcon platform, announcing that the core technology would also help its customers stop internal attacks.
"
|
3,038 | 2,023 |
"With 'GitHub for data,' Gable.ai wants to connect software engineers and ML developers | VentureBeat"
|
"https://venturebeat.com/ai/with-github-for-data-gable-ai-wants-to-connect-software-engineers-and-ml-developers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages With ‘GitHub for data,’ Gable.ai wants to connect software engineers and ML developers Share on Facebook Share on X Share on LinkedIn Gable.ai co-founders (L to R) Adrian Kreuziger (CTO), Chad Sanderson (CEO) and Daniel Dicker (Founding Engineer) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI applications are booming. But to keep them from breaking, the data flowing into those apps needs to be high-quality — that is, reliable, complete and accurate.
That’s the problem Gable.ai is poised to solve as the Seattle-based startup launches out of stealth today with $7 million in seed funding. It calls its offering the first data collaboration platform that allows software and data/ML developers to iteratively build and manage high-quality data assets, but investors have taken to calling it “GitHub for data” — one that founders of other data companies like Kaggle and Hex are investing in.
“GitHub is actually affecting culture — it’s helping software engineers from all around the company communicate with each other much more effectively,” said Chad Sanderson, CEO and co-founder of Gable.ai. “But that doesn’t exist for data at all.” Gable.ai’s platform allows data producers and data consumers to work together, he told VentureBeat. It helps software and data developers prevent breaking changes to critical data workflows within their existing data infrastructure. The platform features data asset recognition by connecting data sources; data contract creation to establish data asset owners and set meaningful constraints; and data contract enforcement via continuous integration/continuous deployment within GitHub.
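To give a feel for the underlying idea (this is a generic illustration, not Gable.ai's actual contract format), a data contract can be thought of as a machine-checkable agreement about a data asset's owner, schema and constraints that a CI pipeline can enforce:

```python
# Generic illustration of a data contract check that could run in CI.
# This is not Gable.ai's format; it simply shows the idea of enforceable expectations.
CONTRACT = {
    "asset": "shipments",
    "owner": "data-platform-team",
    "fields": {"shipment_id": str, "carrier": str, "price_usd": float},
}

def validate(record: dict, contract: dict) -> list:
    errors = []
    for name, expected_type in contract["fields"].items():
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], expected_type):
            errors.append(f"wrong type for {name}")
    return errors

print(validate({"shipment_id": "S-1", "carrier": "Acme", "price_usd": "120"}, CONTRACT))
# -> ['wrong type for price_usd']; a CI step could block the change that introduced this
```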
Founders led data department at Convoy Before founding Gable.ai, Sanderson and his co-founders, Adrian Kreuziger and Daniel Dicker, led the data department at Convoy, the $4 billion digital freight network that moves thousands of truckloads around the country each day through an optimized, connected network of carriers. Complex data came in fast and furiously: about shipments, shippers, facilities, carriers, trucks, contracts and prices.
While the company had the modern data stack, using the latest and greatest technologies, no one had any trust in the data — there were constant data quality issues, outages for valuable models, and billions of rows of data could not be used.
“When our data science team and the analytics team were trying to understand even simple questions like ‘How many shipments did we do over the past 30 days?’, all of that complexity made it almost impossible to answer that question,” Sanderson said. “And it was the same problem in machine learning — the models were very, very sensitive and the data scientist needed to figure out exactly what data from this very complex system needed to go into that model. When the data quality was wrong, when something suddenly changed, all these sensitive models started to break down, and all the predictions that they made turned out to be wrong.” Ultimately, he explained, the problem was the communication gap between software engineers and ML developers. “Once we helped bridge that gap, we saw the improvement of data quality exponentially almost immediately,” he said.
In order to scale AI, solving communication problems around changes to data is essential, Sanderson emphasized.
“If you don’t have a change management system for your data, you will not be able to scale AI — you just can’t,” he explained. “The way the Googles and Metas and Amazons solved this problem is throwing bodies at the problem. When a new machine learning model is shipped, there need to be two, three, four data engineers in the room.” But at a company like Convoy, he explained, “we didn’t have the ability to do that. Our data engineering team was six people.” A new part of the data stack Gable.ai’s data contracts are an entirely new category Gable.ai has been able to establish as an emerging data primitive — that is, a basic data type. In the last few months, Sanderson has built the “ Data Quality Camp ,” a Slack community of 8,000+ engaged data practitioners around these new concepts.
These concepts are meant to mark a significant step towards reshaping the data landscape, becoming a new part of a company’s data stack, said Apoorva Pandhi, managing director at Zetta Venture Partners, which led the funding round.
“All the founders of successful data companies, whether it’s dbt Labs, Monte Carlo, Hex, Kaggle, Hightouch, Great Expectations, they’ve all invested in the company and endorsed the fact that this is an integral part of the data stack,” he said.
"
|
3,039 | 2,023 |
"Report: Enterprise investment in generative AI shockingly low, while traditional AI is thriving | VentureBeat"
|
"https://venturebeat.com/ai/report-enterprise-investment-in-generative-ai-shockingly-low-while-traditional-ai-is-thriving"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Enterprise investment in generative AI shockingly low, while traditional AI is thriving Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI is all anyone can talk about. It is a breakthrough technology with transformational promises across numerous domains — even human life itself.
And while 2023 was undoubtedly the year that gen AI had its breakout, that has largely been hype, according to a Menlo Ventures report shared exclusively with VentureBeat.
Gen AI still accounts for a “relatively paltry” amount of enterprise cloud spend — less than 1%. Traditional AI spend, on the other hand, comprises 18% of the $400 billion cloud market.
“A lot of people thought generative AI would rapidly take over the world,” Derek Xiao, investor with Menlo, told VentureBeat. “AI is a fundamental step forward. But the reality is that this takes time, especially in the enterprise.” Spend in traditional AI increasing Some projections put the gen AI market at $76.8 billion by 2030, representing a compound annual growth rate (CAGR) of 31.5% from 2023. Others say the technology will create at least $450 billion in the enterprise market across 12 verticals over the next 7 years.
While ChatGPT has dominated boardroom discussions — not to mention water cooler and dining room table conversations — since its debut in November 2022, half of the enterprises polled in Menlo’s State of AI in the Enterprise report had implemented some form of AI before 2023.
In fact, the number of enterprises using AI grew by 7 percentage points — from 48% to 55% — and AI spend grew roughly 8% on average. Of any department, product engineering teams spend the most on AI.
Still, Menlo’s research indicates that enterprises have strong trepidations around gen AI.
“We thought generative AI was going to be this overnight success story,” Naomi Ionita, Menlo partner, told VentureBeat. But 2023 was “a year of experimentation and tire-kicking.” Looking ahead, “2024 will be the hard work of implementing generative AI,” said Xiao.
Concerns around generative AI adoption Leaders at large-scale enterprises should find a sense of comfort in these findings and recognize that moving slowly is OK, Menlo partner Tim Tully told VentureBeat.
“The smart folks are taking their time,” he said, noting that the rapidly evolving nature of gen AI is leading to a tentativeness to adopt. Also, in many cases “the dollars aren’t there.” “These are expensive decisions to make,” he said.
As has been the case with other transformative technologies — such as the cloud — adoption will continue to be measured, Menlo predicts.
Barriers continue to revolve around unproven ROI and the “last mile problem,” said Ionita. Other concerns include data privacy, shortage of AI talent, lack of organizational bandwidth, compatibility with existing infrastructure and limited explainability and customizability.
Menlo reports that enterprise solutions “have yet to deliver on their promise of meaningful transformation.” They have failed to create new workflows and behaviors and productivity gains feel limited. Buyers will continue to remain skeptical until they can see true value.
Also, in this market, “it’s harder than ever to get past the CFO,” said Ionita. “There are real barriers to overcome, the promise is there, but when we get down to brass tacks, how do we get it into production?” However, early adopters of gen AI are seeing significant gains when it comes to using their data and cutting “mundane, painful workflows.” “It’s meeting the user in ways we were not able to do before,” said Ionita.
Tully noted that users can create “really remarkable tools” in just 20 minutes (or less).
“It’s changing workflows,” he said. “It will replace teams, make people’s jobs easier, make people more successful. There is real value and revenue being created.” Opportunities both horizontal and vertical As the gen AI market continues to grow, Menlo sees great opportunities for startups in both vertical (industry-specific) and horizontal (more generalized) applications.
Ionita pointed out that the AI world will be hybrid: Many enterprises are already using more than one foundation platform and smaller models will be used for different, specialized use cases.
“When generative AI is introduced, industry-specific tools gain superpowers,” the report states.
For example, marketers have embraced the video content creation tool Synthesia while the legal world is increasingly leveraging Harvey to perform contract analysis and ensure regulatory compliance. Other specialized startups include Greenlite for finance, Abridge for healthcare and Higharc for architecture.
Meanwhile, horizontal AI tools help to automate manual tasks and workflows. Menlo also anticipates a rise of AI agents that can “think and act independently.” These sophisticated tools will be able to, for example, handle emails, calendars and note taking, and integrate into department and domain-specific workflows.
“Giving people their time back is an obvious value,” said Ionita, noting that the average employee is working across a “patchwork quilt” of tools.
Going forward, “AI will lose its novelty and become an unsurprising, if not expected, collaborator throughout the workday,” the report states.
Standardizing the modern AI stack Menlo, which has invested in Anthropic and Pinecone, found that enterprises invested $1.1 billion in the modern AI stack this year, making it the largest new market in the gen AI domain.
Buyers report that 35% of their infrastructure dollars go to foundation models from providers such as OpenAI and Anthropic. These closed-source models continue to dominate, comprising upwards of 85% of models in production.
Furthermore, most models are off-the-shelf; only 10% of enterprises pre-train their models.
Most enterprises adopt multiple models for higher controllability and lower costs, and 96% of spend is on inference. Prompt engineering is the most popular customization method, while human review is the most popular evaluation method.
Also, retrieval-augmented generation (RAG) is becoming standard. This framework augments large language models (LLMs) with information from external knowledge bases to overcome the limitations of fixed datasets and generate up-to-date, contextually relevant responses.
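The report does not prescribe an implementation, but a minimal sketch of the RAG pattern looks roughly like the following; the embed() and generate() functions are placeholders standing in for whatever embedding model and LLM an organization actually uses, and the documents are invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed() and generate() are stand-ins for a real embedding model and LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"[model answer based on a prompt of {len(prompt)} characters]"

documents = [
    "Q3 shipping volumes rose 12% quarter over quarter.",
    "The refund policy changed on June 1 to allow 30-day returns.",
    "On-call rotations are documented in the engineering handbook.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, top_k: int = 2) -> str:
    q = embed(question)
    # Cosine similarity between the question and every document in the knowledge base.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What is the current refund policy?"))
```

In production the document store is usually a vector database and the retrieved passages are filtered and ranked more carefully, but the shape of the pattern is the same: retrieve, then generate.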
Of the enterprises surveyed by Menlo, 31% were using this approach, while 19% used fine-tuning methods, 18% were implementing adapters and 13% were incorporating reinforcement learning from human feedback (RLHF).
While the first half of the year was “sort of the wild west, under constant construction and revision,” as Xiao described it, the industry is beginning to converge around core components and standard practices.
Still, the modern AI stack is by no means standardized. According to Menlo, this offers opportunities for startups in areas such as remote environments to run and deploy models; extract, transform and load (ETL) tools that handle data pipeline creation; and data loss prevention, content governance and threat detection and response (to name a few).
Ultimately, startups should not be looking to compete, said Xiao; they should focus on tools offering new workflows, next-generation reasoning, chain-of-thought and proprietary data analysis.
It’s not enough to just be a “ChatGPT wrapper,” he said. “It’s really about the ability to create new markets where incumbents are not. This is a warning to startups that differentiation really matters.”
"
|
3,040 | 2,023 |
"Gartner: Generative AI will be everywhere, so strategize now | VentureBeat"
|
"https://venturebeat.com/ai/gartner-generative-ai-will-be-everywhere-so-strategize-now"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gartner: Generative AI will be everywhere, so strategize now Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The human-machine relationship is dynamic and evolving. Generative AI, in particular, is set to completely transform enterprise processes, decision-making, strategy and other elements that have yet to be considered.
For this reason, AI adoption should no longer be considered an IT initiative, but an enterprise initiative. Furthermore, to keep pace and take full advantage, executives must prioritize their AI ambitions and AI-ready scenarios for the next 12 to 24 months.
Gartner analysts offered this guidance — as well as a slew of other stats and predictions, most involving generative AI — at this week’s IT Symposium/Xpo, which wraps tomorrow.
“Generative AI is not just a technology or business trend — it is a profound shift in how humans and machines interact,” Gartner distinguished VP analyst Mary Mesaglio said in an opening keynote. “We are moving from what machines can do for us to what machines can be for us.” The year generative AI becomes democratized Nearly three-quarters (73%) of CIOs polled by Gartner said their enterprise will increase funding for AI/ML in 2024. Similarly, 80% said their organizations are planning on full gen AI adoption within the next three years.
This strategizing, along with the confluence of massively pretrained models, cloud computing and open source, will make 2024 the year that gen AI becomes democratized. Boldly, Gartner predicts that by 2025, the technology will be a workforce partner for 90% of organizations globally.
In turn, this will lead to the need for AI Trust, Risk and Security Management (TRiSM), which will provide tooling for ModelOps, proactive data protection, AI-specific security, monitoring for data and model drift and risk controls for both inputs and outputs, according to Gartner.
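Gartner's TRiSM framing is tool-agnostic, but as a hedged illustration of one piece of it, drift monitoring often comes down to comparing a model's live inputs against its training baseline. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy on synthetic data, with a purely illustrative alert threshold.

```python
# Illustrative data-drift check: compare live feature values against the
# training-time baseline with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_baseline = rng.normal(loc=50.0, scale=10.0, size=5_000)  # feature values seen at training time
live_traffic = rng.normal(loc=58.0, scale=12.0, size=1_000)       # recent production inputs (shifted)

statistic, p_value = ks_2samp(training_baseline, live_traffic)

# The threshold is illustrative; real systems tune it per feature and route
# alerts into whatever monitoring stack the organization already runs.
DRIFT_P_VALUE = 0.01
if p_value < DRIFT_P_VALUE:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): review inputs before trusting outputs.")
else:
    print("No significant drift detected.")
```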
The firm also predicts a rise in machine customers (‘custobots’) that can autonomously negotiate and purchase goods and services. In fact, by 2028, 15 billion connected products will have the potential to behave as customers, and this will eventually become more significant than the arrival of digital commerce, Gartner asserts.
As Gartner distinguished VP analyst Don Scheibenreif posited in a virtual press briefing: “What happens when your best customers are not human?” Enterprises must be thinking about how that will impact their sales, marketing, HR and other efforts.
The coming year will also bring an increase in AI-augmented development ; continuous threat exposure management; sustainable technology; platform engineering; and industry cloud platforms that address specific outcomes by combining SaaS, PaaS and IaaS.
Everyday AI, game-changing AI There are two emerging types of AI in enterprise, Mesaglio said in the virtual press session: everyday AI and game-changing AI.
“Everyday AI is your productivity partner,” she said. “It enables workers to do what they already do faster and more efficiently.” Ultimately, though, it will go from “dazzling to ordinary with outrageous speed,” she said. Everyone will have access to the same tools, so there will be no sustainable competitive advantage — meaning that everyday AI is the new table stakes.
Game-changing AI, meanwhile, is a “creativity partner,” said Mesaglio. It doesn’t just make people faster or better, it creates new results, products and services, “or it creates new ways to create new results.” “With game-changing AI, machines will disrupt business models and entire industries,” she said.
Establishing AI ambition, readiness In defining their ambitions with AI, CIOs and other members of the C-suite should examine opportunities and risks in the back office, the front office, new products and services and new core capabilities, according to Scheibenreif.
In moving towards AI-readiness , enterprises should establish “lighthouse principles” that align with organizational values, he advised — and the CEO should set the tone in this area.
“They should help drive the values for the organization,” he said, “and the application of AI and the human-machine relationship should emanate from those values.” Another critical element is to make AI data-ready — meaning it is secure, enriched, fair, accurate and governed by lighthouse principles. Finally, enterprises should implement AI-ready security, preparing themselves for new attack vectors and creating an acceptable use policy.
Ultimately, Scheibenreif pointed out that “generative AI is not everything, there’s a whole bunch of technologies that are connected to it.” As humans work more closely with those technologies, we’ll gain a better understanding of “how we interact with machines and what they can do for us,” he said.
Don’t just focus on the ‘tyranny of the quarter’ In implementing new technologies, enterprises can tend to be a bit short-sighted — take the frenzied race to digital transformation over the last few years, for example.
“Organizations were saying ‘We just want to be digital,’” said Mesaglio. “Digital is never an outcome. It’s only a means to an outcome. The outcome is something that’s working.” She emphasized the importance of being intentional and having meaningful conversations about the kinds of relationships people want to have with machines.
“Yes, there are ROI considerations,” she said. “Yes, there are productivity considerations. Yes, there are technological considerations. How do we make stuff work together?” Many enterprises make mistakes in looking exclusively at productivity gains and “focusing on the tyranny of the quarter,” agreed Gartner distinguished VP analyst Erick Brethenoux.
“We call that within boundaries,” he said.
Innovators push and break boundaries when they explore and build new, innovative products and services. What he called “the fringe” is where breakthroughs are made.
“And 3% [of organizations] will be dedicated to that,” he said. “And it’s fun to do.” The rise of decision intelligence The new wave of AI application within the enterprise is what Brethenoux called decision intelligence. And in order to aid strategic decision-making that is actionable and explainable, machines need to interact properly and efficiently.
While anthropomorphism can often lead to fear and skepticism, in this case the humanizing of machines can be helpful, he contended. While machines don’t have sentience — and, he said, will “get very, very, very, very close but won’t reach singularity” — an anthropomorphic interface can help us better relate to them.
“It has a human-like voice, it can answer my questions, it can interact with me,” he said. “There’s a double-sided thing to be very careful not to push it too far, but at the same time exploit it to allow that direct interaction.” The art of the question As gen AI becomes ever more pervasive throughout enterprise, prompt engineering will be a critical skill, Brethenoux noted.
“Engineering is a very important part of what’s coming,” he said.
He pointed out that “answers are less important than questions,” and that humans must know how to correctly question technology so that it provides useful answers.
“So the way you ask questions is important,” he said. And it’s typically not a technical question — it’s more often a business question or a process question, which requires both technology and content and domain expertise.
This doesn’t necessarily necessitate new hires, he emphasized. Enterprise leaders should look at their existing technology and business experts and invest in upskilling them.
“You already have technology experts,” he said, “you have people working together, they know your business problems.”
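To make the "art of the question" concrete, here is a small, hypothetical illustration of the difference between a bare question and one framed with business context, constraints and an expected output shape; the scenario and template are invented for the example rather than taken from Gartner.

```python
# Hypothetical illustration of question framing ("prompt engineering").
# A bare question versus a prompt that carries business context and constraints.

bare_question = "Why did churn go up?"

framed_prompt = """You are assisting a retention analyst at a subscription business.

Context:
- Monthly churn rose from 3.1% to 4.4% between August and October.
- A price increase of 8% took effect on September 1.
- Support ticket volume is unchanged.

Task: List the three most plausible drivers of the churn increase and, for each,
one piece of data that would confirm or rule it out.
Answer as a numbered list, two sentences per item."""

def ask(llm, prompt: str) -> str:
    # llm is any callable that maps a prompt string to a completion string.
    return llm(prompt)

# The framed prompt encodes the business question, the relevant facts and the
# expected output shape, which are exactly the parts domain experts supply best.
```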
"
|
3,041 | 2,023 |
"Forget ChatGPT, why Llama and open source AI win 2023 | VentureBeat"
|
"https://venturebeat.com/ai/forget-chatgpt-why-llama-and-open-source-ai-win-2023"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Forget ChatGPT, why Llama and open source AI win 2023 Share on Facebook Share on X Share on LinkedIn Image created with DALL-E 3 for VentureBeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Could a furry camelid take the 2023 crown for the biggest AI story of the year? If we’re talking about Llama , Meta’s large language model that took the AI research world by storm in February — followed by the commercial Llama 2 in July and Code Llama in August — I would argue that the answer is… (writer takes a moment to duck) yes.
I can almost see readers getting ready to pounce.
“What? Come on — of course , ChatGPT was the biggest AI story of 2023!” I can hear the crowds yelling. “OpenAI’s ChatGPT, which launched on November 30, 2022, and reached 100 million users by February? ChatGPT, which brought generative AI into popular culture? It’s the bigger story by far!” Hang on — hear me out. In the humble opinion of this AI reporter, ChatGPT was and is, naturally, a generative AI game-changer. It was, as Forrester analyst Rowan Curran told me, “the spark that set off the fire around generative AI.” But starting in February of this year, when Meta released Llama, the first major free ‘open source’ Large Language Model (LLM) (Llama and Llama 2 are not fully open by traditional license definitions), open source AI began to have a moment — and a red-hot debate — that has not ebbed all year long. That is even as other Big Tech firms, LLM companies and policymakers have questioned the safety and security of AI models with open access to source code and model weights, and the high costs of compute have led to struggles across the ecosystem.
According to Meta, the open-source AI community has fine-tuned and released over 7,000 Llama derivatives on the Hugging Face platform since the model’s release, including a veritable animal farm of popular offspring such as Koala, Vicuna, Alpaca, Dolly and RedPajama. There are many other open source models, including Mistral and Falcon, as well as models from Hugging Face, but Llama was the first that had the data and resources of a Big Tech company like Meta supporting it.
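For a sense of what building on Llama looks like in practice, here is a minimal, hedged sketch using the Hugging Face transformers library. It assumes access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint (any of the open derivatives on the Hub can be swapped in), a GPU with enough memory, and the accelerate package for automatic device placement.

```python
# Minimal sketch: running a Llama 2 chat model (or an open derivative)
# from the Hugging Face Hub with the transformers library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated: requires accepting Meta's license on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # needs the accelerate package
)

prompt = "Explain in two sentences why open model weights matter for startups."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```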
You could consider ChatGPT the equivalent of Barbie, 2023’s biggest blockbuster movie. But Llama and its open-source AI cohort are more like the Marvel Universe, with its endless spinoffs and offshoots that have the cumulative power to offer the biggest long-term impact on the AI landscape.
This will lead to “more real-world, impactful gen AI applications and cementing the open-source foundations of gen AI applications going forward,” Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab , told me.
Open-source AI will have the biggest long-term impact The era of closed, proprietary models began, in a sense, with ChatGPT. OpenAI launched in 2015 as a more open-sourced, open-research company. But in 2023, OpenAI co-founder and chief scientist Ilya Sutskever told The Verge it was a mistake to share their research , citing competitive and safety concerns.
Meta’s chief AI scientist Yann LeCun, on the other hand, pushed for Llama 2 to be released with a commercial license along with the model weights. “I advocated for this internally,” he said at the AI Native conference in September. “I thought it was inevitable, because large language models are going to become a basic infrastructure that everybody is going to use, it has to be open.” Carlsson, to be fair, considers my ChatGPT vs. Llama argument to be an apples-to-oranges comparison. Llama 2 is the game-changing model, he explained, because it is open-source, licensed for commercial use, can be fine-tuned, can be run on-premises, and is small enough to be operationalized at scale.
But ChatGPT, he said, is “the game-changing experience that brought the power of LLMs to the public consciousness and, most importantly, business leadership.” Yet as a model, he maintained, GPT 3.5 and 4 that power ChatGPT suffer “because they should not, except in exceptional circumstances, be used for anything beyond a PoC [proof of concept].” Matt Shumer, CEO of Otherside AI, which developed Hyperwrite , pointed out that Llama likely would not have had the reception or influence it did if ChatGPT didn’t happen in the first place. But he agreed that Llama’s effects will be felt for years: “There are likely hundreds of companies that have gotten started over the last year or so that would not have been possible without Llama and everything that came after,” he said.
And Sridhar Ramaswamy, the former Neeva CEO who became SVP at data cloud company Snowflake after it acquired Neeva, said “Llama 2 is 100% a game-changer — it is the first truly capable open source AI model.” ChatGPT had appeared to signal an LLM repeat of what happened with cloud, he said: “There would be three companies with capable models, and if you want to do anything you would have to pay them.” Instead, Meta released Llama.
Early Llama leak led to a flurry of open-source LLMs Launched in February, the first Llama model stood out because it came in several sizes, from 7 billion parameters to 65 billion parameters — Llama’s developers reported that the 13B parameter model’s performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175B parameters) and that the largest model was competitive with state of the art models such as PaLM and Chinchilla. Meta made Llama’s model weights available for academics and researchers on a case-by-case basis — including Stanford for its Alpaca project.
But the Llama weights were subsequently leaked on 4chan. This allowed developers around the world to fully access a GPT-level LLM for the first time — leading to a flurry of new derivatives. Then in July, Meta released Llama 2 free to companies for commercial use, and Microsoft made Llama 2 available on its Azure cloud-computing service.
Those efforts came at a key moment when Congress began to talk about regulating artificial intelligence — in June, two U.S. Senators sent a letter to Meta CEO Mark Zuckerberg that questioned the Llama leak, saying they were concerned about the “potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.” But Meta consistently doubled down on its commitment to open-source AI: In an internal all-hands meeting in June, for example, Zuckerberg said Meta was building generative AI into all of its products and reaffirmed the company’s commitment to an “open science-based approach” to AI research.
Meta has long been a champion of open research More than any other Big Tech company, Meta has long been a champion of open research — including, notably, creating an open-source ecosystem around the PyTorch framework. As 2023 draws to a close, Meta will celebrate the 10th anniversary of FAIR (Fundamental AI Research), which was created “to advance the state of the art of AI through open research for the benefit of all.” Ten years ago, on December 9, 2013, Facebook announced that NYU Professor Yann LeCun would lead FAIR.
In an in-person interview with VentureBeat at Meta’s New York office, Joelle Pineau, VP of AI research at Meta, recalled that she joined Meta in 2017 because of FAIR’s commitment to open research and transparency.
“The reason I came there without interviewing anywhere else is because of the commitment to open science,” she said. “It’s the reason why many of our researchers are here. It’s part of the DNA of the organization.” But the reason to do open research has changed, she added. “I would say in 2017, the main motivation was about the quality of the research and setting the bar higher,” she said. “What is completely new in the last year is how much this is a motor for the productivity of the whole ecosystem, the number of startups who come up and are just so glad that they have an alternative model.” But, she added, every Meta release is a one-off. “We’re not committing to releasing everything [open] all the time, under any condition,” she said. “Every release is analyzed in terms of the advantages and the risks.” Reflecting on Llama: ‘a bunch of small things done really well’ Angela Fan, a Meta FAIR research scientist who worked on the original Llama, said she also worked on Llama 2 and the efforts to convert these models into the user-facing product capabilities that Meta showed off at its Connect developer conference last month (some of which have caused controversy, like its newly-launched stickers and characters ).
“I think the biggest reflection I have is even though the technology is still kind of nascent and almost squishy across the industry, it’s at a point where we can build some really interesting stuff and we’re able to do this kind of integration across all our apps in a really consistent way,” she told VentureBeat in an interview at Connect.
She added that the company looks for feedback from its developer community, as well as the ecosystem of startups using Llama for a variety of different applications. “We want to know, what do people think about Llama 2? What should we put into Llama 3?” she said.
But Llama’s secret sauce all along, she said, has been “a bunch of small things done really well and right over a longer period of time.” There were so many different components, she recalled — like getting the original data set right, figuring out the number of parameters and pre-training it on the right learning rate schedule.
“There were many small experiments that we learned from,” she said, adding that for someone who doesn’t understand AI research, it can seem “like a mad scientist sitting somewhere. But it’s truly just a lot of hard work.” The push to protect open-source AI A big open-source ecosystem with a broadly useful technology has been “our thesis all along,” said Vipul Ved Prakash, co-founder of Together , a startup known for creating the RedPajama dataset in April, which replicated the Llama dataset, and releasing a full-stack platform and cloud service for developers at startups and enterprises to build open-source AI — including by building on Llama 2.
Prakash, not surprisingly, agreed that he considers Llama and open-source AI to be the game-changers of 2023 — it is a story, he explained, of developing viable, high-quality models, with a network of companies and organizations building on them.
“The cost is distributed across this network and then when you’re providing fine tuning or inference, you don’t have to amortize the cost of the model builds,” he said.
But at the moment, open-source AI proponents feel the need to push to protect access to these LLMs as regulators circle. At the U.K. Safety Summit this week, the overarching theme of the event was mitigating the risk of advanced AI systems wiping out humanity if they fall into the hands of bad actors — presumably with access to open-source AI.
But a vocal group from the open source AI community, led by LeCun and Google Brain co-founder Andrew Ng, signed a statement published by Mozilla saying that open AI is “an antidote, not a poison.” Sriram Krishnan, a general partner at Andreessen Horowitz, tweeted in support of Llama and open-source AI: “Realizing how important it was for @ylecun and team to get llama2 out of the door. A) they may have never had a chance to later legally B) we would have never seen what is possible with open source (see all the work downstream of llama2) and thought of LLMs as the birthright of 2-4 companies.”
“Hands down, ChatGPT,” wrote Nikolaos Vasiloglou, VP of ML research at RelationalAI.
“The reason it is a game-changer is not just its AI capabilities, but also the engineering that is behind it and its unbeatable operational costs to run it.” And John Lyotier, CEO of TravelAI, wrote: “Without a doubt the clear winner would be ChatGPT. It has become AI in the minds of the public. People who would never have considered themselves technologists are suddenly using it and they are introducing their friends and families to AI via ChatGPT. It has become the ‘every-day person’s AI.’” Then there was Ben James, CEO of Atlas, a 3D generative AI platform, who pointed out that Llama has reignited research in a way ChatGPT did not, and this will bring about stronger, longer-term impact.
“ChatGPT was the clear game-changer of 2023, but Llama will be the game-changer of the future,” he said.
Ultimately, perhaps what I’m trying to say — that Llama and open source AI win 2023 because of how it will impact 2024 and beyond — is similar to the way Forrester’s Curran puts it: “The zeitgeist generative AI created in 2023 would not have happened without something like ChatGPT, and the sheer number of humans who have now had the chance to interact with and experience these advanced tools, compared to other cutting edge technologies in history, is staggering,” he said.
But, he added, open source models – and particularly those like Llama 2 which have seen significant uptake from enterprise developers — are providing a lot of the ongoing fuel for the on-the-ground development and advancement of the space.
In the long term, Curran said, there will be a place for both proprietary and open source models, but without the open source community, the generative AI space would be a much less advanced, very niche market, rather than a technology that has the potential for massive impacts across many aspects of work and life.
“The open source community has been and will be where many of the significant long-term impacts come from, and the open source community is essential for GenAI’s success,” he said.
"
|
3,042 | 2,023 |
"Dell customizes GenAI and focuses on data lakehouse | VentureBeat"
|
"https://venturebeat.com/ai/dell-customizes-genai-and-focuses-on-data-lakehouse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dell customizes GenAI and focuses on data lakehouse Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Dell Technologies is growing its portfolio of generative AI products and services to help more of its customers harness the power of artificial intelligence.
Today Dell announced a series of initiatives that expand on the company’s generative AI efforts that it has been incrementally rolling out since early 2023. Back in May, Dell announced Project Helix in partnership with Nvidia as an effort to bring the power of large language models (LLMs) to on-premises environments with Dell hardware. A few months later in July, Dell and Nvidia announced the first fruits of the Project Helix effort with validated designs for running AI inference workloads and professional services to support enterprise deployments. Now Dell is going a step further with validated designs for model customization with Nvidia to help organizations fine-tune AI.
Dell is also now detailing its strategy for enabling data for generative AI, with an open data lakehouse platform that benefits from a partnership with data query platform vendor Starburst.
“One note that I want to make about enterprise is that we’re obviously seeing a lot of interest in this area, and I think that it is somewhat early days in the deployment of generative AI on premises,” Carol Wilder, VP of cross-portfolio software and solutions at Dell Technologies, said during a press briefing. “Dell is committed to providing the best solutions and options to our customers so that they have the flexibility and resilience in order to address the business outcomes that they’re trying to achieve.” Dell grows GenAI efforts beyond inference to model customization The first iteration of Dell Validated Designs for Generative AI with Nvidia was just about inferencing. That solution enables organizations to run an optimized stack of Dell and Nvidia hardware, along with optimized software for AI inference.
Wilder explained that with the inference solution, the models are pre-trained and ready to deploy. Model customization, she said, has greater hardware resource requirements than what inference typically demands.
With the new model customization offering Dell is looking to enable organizations to tune models that are specifically optimized for an enterprise’s use case with its own data. Among the use cases that Dell expects the model customization service to be used for are: virtual assistants, content generation and software development.
Wilder noted that the benefit of customization is that enterprises will get optimized models for their own deployment as well as the benefit of models that optimize hardware usage.
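Dell has not published code for these designs, so as a generic, hedged illustration of what tuning a model on an organization's own data often means today, here is a parameter-efficient fine-tuning (LoRA) sketch using the Hugging Face peft library; the base model, target modules and hyperparameters are placeholders and say nothing about Dell's actual stack.

```python
# Generic parameter-efficient fine-tuning (LoRA) sketch -- not Dell's offering.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA trains small adapter matrices instead of all model weights, which keeps
# hardware requirements far below those of full fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-specific
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights

# From here, the adapters would be trained on the enterprise's own text
# (support tickets, product docs, code) with a standard loop or the
# transformers Trainer, then merged with or served alongside the base model.
```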
Dell’s vision for an open, modern data lakehouse Being able to fine tune as well as train generative AI is a process that relies on data, lots and lots of data.
For enterprise use cases, that data isn’t just generic data taken from a public source, but rather is data that an organization already has in its data centers or cloud deployments and is likely also spread across multiple locations. To help enable enterprises to fully benefit from data for generative AI, Dell is building out an open data lakehouse platform.
The data lakehouse concept was originally pioneered by Databricks as a way of enabling organizations to more easily query data stored in cloud object storage-based data lakes. The Dell approach is a bit more nuanced in that it takes a hybrid approach to data, with a goal of being able to query data across on-premises as well as multi-cloud deployments.
Greg Findlen, senior VP of data management at Dell, explained during the press briefing that the open data lakehouse will be able to use Dell storage and compute capabilities as well as multi-cloud storage. On top of the storage, Dell will be integrating the Starburst Enterprise platform, which provides data query and management capabilities that enable data from disparate sources to be used for data analytics and AI.
“We also want to make sure that customers can discover, integrate and process data across the organization, that’s one of the big reasons why we have partnered with Starburst,” Findlen said.
He noted that with the Starburst integration for Dell’s data lakehouse effort, organizations will be able to leverage the data where it exists. Findlen emphasized that the number one priority for the lakehouse effort is making sure that Dell can accelerate how quickly data science teams and the AI developer teams can get access to data from across the organization.
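To illustrate the kind of cross-source query a Trino-based engine such as Starburst enables, here is a hedged sketch using the open-source trino Python client; the endpoint, catalogs, schemas and tables are invented for the example and do not describe Dell's product.

```python
# Illustrative federated query across two catalogs with the Trino Python client.
# Starburst Enterprise is built on Trino; every identifier below is made up.
import trino

conn = trino.dbapi.connect(
    host="lakehouse.example.internal",  # placeholder endpoint
    port=8080,
    user="data_scientist",
    catalog="hive",
    schema="logistics",
)

query = """
SELECT c.region,
       COUNT(*)         AS shipments,
       AVG(s.price_usd) AS avg_price
FROM hive.logistics.shipments AS s
JOIN postgresql.crm.customers AS c
  ON s.customer_id = c.customer_id
WHERE s.shipped_at >= DATE '2023-01-01'
GROUP BY c.region
ORDER BY shipments DESC
"""

cur = conn.cursor()
cur.execute(query)
for region, shipments, avg_price in cur.fetchall():
    print(region, shipments, round(avg_price, 2))
```

The point of the pattern is that the on-premises warehouse and the cloud database are joined in one query, without first copying everything into a single store.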
From Findlen’s perspective, the growth of generative AI has helped to reinforce the primary importance of data for enterprises overall.
“I’m excited by how GenAI has really put a lot more focus on these technologies and the importance of data within the enterprise and the importance of protecting the data that you have and making sure that it stays private but also enabling customers to accelerate their businesses,” Findlen said. “It’s important to think about how all the different kinds of data in the enterprise can feed the many use cases for AI.”
"
|
3,043 | 2,023 |
"Runway launches new 'Watch' feature as CEO says Hollywood AI discourse 'needs to be more nuanced' | VentureBeat"
|
"https://venturebeat.com/ai/runway-launches-new-watch-feature-as-ceo-says-hollywood-ai-discourse-needs-to-be-more-nuanced"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Runway launches new ‘Watch’ feature as CEO says Hollywood AI discourse ‘needs to be more nuanced’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As Hollywood strikes by actors and writers continue, with the impact of generative AI on their industry and jobs a central concern, Runway CEO Cristóbal Valenzuela knows his gen AI video startup — most recently valued at $1.5 billion — is under fire from those on the picket line.
But when I visited the company’s surprisingly spartan Manhattan headquarters last week, Valenzuela told me that while he doesn’t want to dismiss the concerns of writers and actors around their likenesses being generated by AI, or their film-industry jobs being replaced by AI, he believes the conversation around Hollywood and AI “needs to be more nuanced.” “I empathize with the artistic community who might feel threatened or who might have questions,” he said. “At the same time, when you speak with the creators or filmmakers, you start understanding that it’s different from a singular point of view that this is going to replace everything, because it’s not — it’s going to augment a lot of other things as well.” Hollywood’s pushback on AI hasn’t kept the New York City-based company from its efforts to build a community of artists and filmmakers and to support and promote their AI-generated output. In March, Runway held its first annual AI Film Festival , and today it launched a new feature on its website and iOS app called Watch — which allows users to share and consume longer-form videos created with Runway tools.
“A lot of what we’re working towards is both democratizing and making these tools more convenient, but also showcasing the stories being made with those tools,” said Valenzuela. “We really need to highlight the great and positive outcomes with technology. One of those efforts is by showcasing them in the Watch section.” Runway founders bonded over digital art Runway’s offices are located in an unpretentious Tribeca building just a block below noisy Canal Street, abutting a graffiti-filled alleyway. Upon entering, there are no immediate physical clues that the office is in fact the home base of one of the industry’s hottest gen AI startups, which drew a fresh infusion of $141 million last month from Google, Nvidia and Salesforce, among other investors.
Other than a few art posters and a shelf filled with books about design, I was also surprised that the Runway offices don’t show off much evidence of the company’s artistic bona fides.
Originally from Chile, Valenzuela earned a bachelor’s degree in economics and business management, and then a master’s degree in arts and design in 2012. In 2018, he became a researcher at New York University’s Tisch School of the Arts’ Interactive Telecommunications Program ( ITP ), which is sometimes described as an art school for engineers — or an engineering school for artists.
That year, Valenzuela also founded Runway with Tisch colleagues Anastasis Germanidis and Alejandro Matamala Ortiz after the trio bonded over a mutual interest in using digital tools for design. Today, in addition to its initial text-to-video generative AI offering, Runway provides image-to-video, video-to-video, 3D texture, video editing and AI training options.
Early text-to-video typewriter foreshadowed generative AI While Valenzuela said he has always experimented with artistic mediums and techniques, the things he has exhibited have been digital art. One early interactive art project called “Regression,” exhibited at a museum in Chile in 2012, makes it crystal clear that the concept of text-to-video has been on his mind for over a decade.
“It was an old typewriter from my grandpa,” he said. “I connected and built a network of the keystrokes of the typewriter. Imagine a pedestal with a typewriter and a set of white walls. Every keystroke was connected to one another and went to computer software I wrote so that every time you wrote, videos were projected — you were typing words in a physical device and everything you were typing was being recorded in this infinite piece of paper.” The videos were not generated back then, of course, but rather pre-existing videos Valenzuela assembled. “But that was the type of thing that was interesting,” he explained. These days, he says he doesn’t practice making much traditional art: “My art right now is building Runway.” ‘The type of creative outputs we’re trying to provoke’ In June, “Genesis,” a cinematic, 45-second-long sci-fi movie trailer posted by Nicolas Neubert, quickly went viral, with millions of views and coverage on CNN and in Forbes. It was made with Gen-2, Runway’s new gen AI video creation tool.
“Genesis was so great,” said Valenzuela. “I think that’s exactly the type of creative outputs that we’re trying to provoke. It’s great to see those kinds of things being put out there.” He added that it’s “incredible” to know how fast the process was for the creator, but also that the amount of work that was behind it was still significant.
“I think the biggest takeaway is that this trailer, and the many more that we’ve seen coming out, are not just generated with a word, which is what most people think,” he said, pointing to the language models that “have overtaken the public discourse, where everything is reduced to chatbots where you prompt something and you get something out.” Instead, he explained, “you’re making videos, you’re making art — you’re making something that’s visual. It’s all about iteration and doing it multiple times until you pick the one that you like, and then double down on that.” Then, he said, you get to a point where you have a story that you piece together and create something “as beautiful and as weird as he did.” But that whole process, Valenzuela said, “might be misunderstood — as if AI is some sort of automated system that creates everything for you.” Unlike his 2012 interactive art project, it is not possible to simply type a few words and get a fully fleshed-out trailer or movie.
“That’s a very reductionist view of how filmmaking works, but secondly, how art works,” he pointed out. “Just because you have a canvas and paint, you’re not going to become an artist. You need to paint a lot.” At the intersection of art and technology When I asked Valenzuela if it feels strange being in the middle of the conversation around the intersection of art and technology, he said that it does — particularly since the three founders come from exactly that background. What feels different these days, he said, is the mainstream conversation.
“It’s great to see that this has piqued the interest of more people, that more people are questioning what the role of technology like AI is, and the role of art,” he said. “We’ve been working on this for so much time, and we have so many insights on how to best drive both the technology and the conversations forward. I think we need to do that more broadly now that it’s become more mainstream.” What he wants, Valenzuela emphasized, is for people to experiment with Runway’s tools before passing judgment.
“There’s a lot of human agency behind it, perhaps way more than if you used any other tool,” he said. “We need to get more people to use it, because the misconceptions might come from a place of never actually having used something like this because the technology didn’t exist six months ago.” These days, he added, he spends most of his time “just getting people to experiment with it,” as though it were a new camera.
“If you want to understand how it works, use it,” he said. “This thing is not magical on its own. It’s not going to create a movie; you need to have control over it.” That experimentation and nuance, he added, applies to the entire way AI as a technology is perceived. “It’s a very nuanced world and I want to make sure we don’t trap ourselves and industries that we care a lot about, like filmmaking, into one story about how we collectively think about technology,” he said. “We’re in a moment right now where [AI] is going to change a lot of things. We need more diversity of thought, we need more people with different backgrounds, we need more people from different disciplines speaking about it, and not just one set of people.” That sounded similar to Valenzuela’s own story of bringing art and technology together. “I’ve never been a fan of siloing disciplines — like ‘you’re a painter’ or ‘you’re in sculpture,’” he said. “You’re whatever you want to be. Anyone can be an artist if you’re using something to express a view of the world.”
"
|
3,044 | 2,023 |
"Midjourney's new style tuner is here. Here's how to use it. | VentureBeat"
|
"https://venturebeat.com/ai/midjourneys-new-style-tuner-is-here-heres-how-to-use-it"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Midjourney’s new style tuner is here. Here’s how to use it.
Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Midjourney is one of the most popular AI art and text-to-image generators, generating high-quality photorealistic and cinematic works from users’ prompts typed in plain English that have already wound up on TV and in cinemas (as well as on VentureBeat, where we use it along with other tools for article art).
Conceived by former Magic Leap programmer David Holz and launched in the summer of 2022, it has since attracted a community of more than 16 million users in its server on the separate messaging app Discord, and has been steadily updated by a small team of programmers with new features including panning , vary region and an anime-focused mobile app.
But its latest update launched on the evening of Nov. 1, 2023 — called the style tuner — is arguably the most important yet for enterprises, brands and creators looking to tell cohesive stories in the same style. That’s because Midjourney’s new style tuner allows users to generate their unique visual style and apply it to any and potentially all images generated in the application going forward.
We're now testing V1 of our Midjourney "Style Tuner". Type /tune and render a custom web tool that controls our model's personality. Everything from colors to character detail. Explore aesthetics like never before and share resulting style codes and tuning URLs with friends.
Before style tuning, users had to repeat their text descriptions to generate consistent styles across multiple images — and even this was no guarantee, since Midjourney, like most AI art generators, is built to offer a functionally infinite variety of image styles and types.
Now, instead of relying on their language, users can select from a variety of styles and obtain a code to apply to all their works going forward, keeping them in the same aesthetic family. Midjourney users can also elect to copy and paste their code elsewhere to save it and reference it going forward, or even share it with other Midjourney users in their organization to allow them to generate images in that same style. This is huge for enterprises, brands, and anyone seeking to work on group creative projects in a unified style. Here’s how it works: Where to find Midjourney’s style tuner Going into the Midjourney Discord server, the user can simply type “/tune” followed by their prompt to begin the process of tuning their styles.
For example, let’s say I want to update the background imagery of my product or service website for the winter to include more snowy scenes and cozy spaces.
I can type in a single prompt idea I have — “a robot wears a cozy sweater and sits in front of a fireplace drinking hot chocolate out of a mug” — after the “/tune,” like this: “/tune a robot wears a cozy sweater and sits in front of a fireplace drinking hot chocolate out of a mug.” Midjourney’s Discord bot responds with a large automatic message explaining the style-tuning process at a high level and asking if the user wants to continue. The process requires a paid Midjourney subscription plan (they start at $10 per month paid monthly or $96 per year up-front) and uses up some of the fast hours GPU credits that come with each plan (and vary depending on the plan tier level, with more expensive plans granting more fast hours GPU credits). These credits are used for generating images more rapidly than the “relaxed” mode.
Selecting style directions and mode and what they mean This message includes two drop-down menus allowing the user to select different options: the number of “style directions” (16, 32, 64, or 128) and the “mode” (default or raw).
The “style directions” setting indicates how many different images Midjourney will generate from the user’s prompts, each one showing a distinctly different style. The user will then have the chance to choose their style from between these images, or combine the resulting images to create a new meta-style based on several of them.
Importantly, the different numbers of images produced by the different style direction options each cost a different amount of fast hours GPU credits. For instance, 16 style directions use up 0.15 fast hours of GPU credits, while 128 style directions use up 1.2 credits. So the user should think hard and discerningly about how many different styles they want to generate and whether they want to spend all those credits.
Meanwhile, the "mode" setting is binary, allowing the user to choose between default and raw, which governs how stylized or candid the images will appear. Raw images are meant to look more like they came from a film or DSLR camera and, as such, may be more photorealistic, but they can also contain grain and artifacts that the smoother, more sanitized default mode does not.
In our walkthrough for this article, VentureBeat selected 16 style directions and default mode. In our tests, and in those reported by several users online, Midjourney was erroneously giving users one tier more style directions than they asked for — so in our case, we got 32 even though we asked for 16.
After you select a mode and number of style directions, the Midjourney bot asks you to confirm and shows again how many fast hours the run will use. Press the green button to continue; the process can take up to two minutes.
Where to find the different styles to choose from After Midjourney finishes processing your style tuner options, the bot should respond with a message saying “Style Tuner Ready! Your custom style tuner has finished generating. You can now view, share and generate styles here:” followed by a URL to the Midjourney Tuner website (the domain is tuner.midjourney.com).
The resulting URL should contain a random string of letters and numbers at the end. We’ve removed ours for security purposes in the screenshot below.
Clicking the URL takes the user out of the Discord app and onto the Midjourney website in their browser.
There, the user will see a templated message from Midjourney showing the user's prompt language and explaining how to finish the tuning process. Namely, Midjourney asks the user to select between two different options with labeled buttons: "Compare two styles at a time" or "Pick your favorite from a big grid." In the first option, "Compare two styles at a time," Midjourney displays the style directions generated earlier in Discord in rows of two. In our case, that's 16 rows. Each row contains two grids of four images each, so 8 images per row.
The user can then choose one grid from each row, from as many rows as they like, and Midjourney will make a style informed by the combination of those grids. You can tell which grid is selected by the white outline that appears around it.
So, if I chose the grid on the right from the first row and the grid on the left from the bottom row, Midjourney would blend both of those styles into a combined style, which could then be applied to all images going forward. As Midjourney notes at the bottom of this selection page, selecting more choices from each row results in a more "nuanced and aligned" style, while selecting only a few options will result in a "bold style." The second option, "Pick your favorite from a big grid," lets the user choose just one image from the entire grid of images generated according to the number of style directions set previously. In our case for this article, that's a total of 32 images arranged in an 8×4 grid. This option is more precise and less ambiguous than the "compare two styles" option, but also more limiting as a result.
In our case, for this article, we will select the "Compare two styles at a time" option, choose 5 grids in total and leave it to the algorithm to decide what the combined style looks like.
Applying your freshly tuned style going forward to new images and prompts Whatever number of rows or images a user selects to base their style on, Midjourney will automatically apply that style and turn it into a shortcode of numerals and letters that the user can manually copy and paste for all prompts going forward. That shortcode appears in several places at the bottom of the user’s unique Style Tuner page, both in a section marked “Your code is:” followed by the code, and then also in a sample prompt based on the original the user provided at the very bottom in a persistent overlay chyron element.
The user can then either copy this code and save it somewhere, or copy their entire original prompt with the code added from the bottom chyron. You can also redo this whole style by pressing the small “refresh” icon at the bottom (circular arrows).
Then, the user will need to return to the Midjourney Discord server and paste the code in after their prompt as follows: "/imagine a robot wears a cozy sweater and sits in front of a fireplace drinking hot chocolate out of a mug --style [INSERT STYLE CODE HERE]" Here's our resulting grid of four images using the original prompt and our freshly generated style: We like the fourth one best, so we will select that one to upscale by clicking "U4" and voila, there is our resulting cozy robot drinking hot chocolate by the fireplace! Now let's apply the same style to a new prompt by copying and pasting/manually adding the "--style" language to the end of our new prompt, like so: "a robot family opens presents --style [INSERT STYLE CODE HERE]" Here's the result (after choosing one from our four-image grid): Not bad! Note this is after a few regenerations going back and forth. The style code also works alongside other parameters in your prompt, including aspect ratio/dimensions. Here's a 16:9 version using the same prompt but written like so: "a robot family opens presents --ar 16:9 --style [INSERT STYLE CODE HERE]" Cute but a little wonky. We might suggest continuing to refine this one.
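For teams that generate many prompts, a small helper can keep the shared style code and other parameters consistent. This is a minimal sketch, not an official Midjourney tool; the style code and prompts below are placeholders you would replace with your own.

```python
# Minimal prompt-builder sketch for reusing a saved Midjourney style code
# (and other parameters) consistently across a team's prompts.
# "abc123" below is a placeholder, not a real style code.

def build_prompt(subject: str, style_code: str, aspect_ratio: str = "") -> str:
    """Assemble an /imagine prompt that appends a saved --style code (and optional --ar)."""
    parts = [f"/imagine {subject}"]
    if aspect_ratio:
        parts.append(f"--ar {aspect_ratio}")
    parts.append(f"--style {style_code}")
    return " ".join(parts)

if __name__ == "__main__":
    style = "abc123"  # placeholder: paste the code from your Style Tuner page here
    print(build_prompt("a robot family opens presents", style))
    print(build_prompt("a robot family opens presents", style, aspect_ratio="16:9"))
```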
"
|
3,045 | 2,021 |
"VB Lab | VentureBeat"
|
"https://venturebeat.com/vb-lab"
|
Innovating for brand partners, thoughtfully.
Establish your authority and influence as a global thought leader with VB Lab.
OUR PROCESS Listen with curiosity We listen to your marketing and business needs with curiosity and a journalistic eye to ideate and create new, custom out-of-the-box solutions that exceed your company's unique goals.
Ideate with a winning team Our industry-leading thought leadership strategists, branded content, product, and customer success experts work with your team to create innovative go-to-market opportunities.
Create category thought leadership Leverage our reach and influence leading technology business decision makers who are ready to transact.
Brands that have partnered with VB Lab VB Lab creates custom opportunities which spotlight partner expertise and fulfill KPIs in the most creative, disruptive ways.
OUR WORK
Facebook The 2021 gaming landscape: What developers, publishers, and marketers need to know
Change Healthcare The API revolution that's securing the future of virtual health care
Bold360 2020 CX predictions: Strategies that double revenue and exceed customer expectations
Samsung BioTech: Accelerating innovation in health care
AWS Using machine learning to tackle the world's biggest problems
Microsoft Azure Remove your ETL bottleneck and let analytics flow
Microsoft Azure The citizen data scientist's time has arrived
Beyond Limits Beyond Conventional AI: More Intelligent, More Explainable AI
"From sponsored articles and webinars to a virtual gaming summit, our paid partnership with VentureBeat Lab yielded strong results and helped us to achieve key business goals. Thanks to VB Lab's highly collaborative style and (VentureBeat's) impressive reach, Facebook was able to connect with members of the gaming community in new and exciting ways—even in the midst of the COVID-19 crisis."
"Teaming up with VB Lab at VentureBeat and GamesBeat is an important partnership to help us share our story and vision for the future of the video game industry. By partnering with one of the most innovative media platforms, we can shape the story on the importance of making games available to everyone worldwide. We are excited to collaborate and to share the vision for the future of what is possible in the video game and fintech industry."
"For games industry, web3 gaming and media brands to command dominance in the business of video gaming, the only way to cut through the noise is via strong news campaigns, and tier one media partnerships like we are proud to have with the formidable VentureBeat/GamesBeat brands. Its network, and fully integrated VB Labs solutions, have proven their worth with Raptor PR's portfolio, enabling tier one fame building and lead generation for our clients to reach stakeholders within the gaming world."
"VB Lab has been a key partner for many years in helping NVIDIA promote its GPU Technology Conference. Their team takes the time to understand our conference goals, and they have a gift for explaining why developers and executives should attend. We value our close relationship, which has returned great results for us."
CUSTOM MARKETING SOLUTIONS Showcase your expertise and create the most impact, with the most relevant audience.
Generate qualified leads for your sales team Lead thought-provoking conversations and showcase your expertise Create resonant experiences Strategic Consulting VB Lab works with your team to identify innovative opportunities. As your partner, we apply our comprehensive understanding of tech audiences and the best ways for you to interact with business decision makers across channels. It’s in our DNA to build authentic marketing strategies through the lens of a journalistic eye.
Storytelling VB Lab introduced a Thought Leadership Platform for B2B marketers in transformative technology designed to help brands lead thought-provoking conversations and create an authentic experience that will resonate with their core audiences. Utilize VentureBeat’s expertise in tech journalism by gaining an understanding of what type of content to use and when. We develop the strategy, create, produce, and distribute custom forms of content across our platforms, that’s meaningful and impactful.
Innovative Product Development Brands are constantly craving new innovative solutions to engage with their audience. VB Lab creates custom opportunities for partners to spotlight their expertise and fulfill KPIs in the most creative, disruptive ways.
Research and Insights Utilize our research and insights team to develop comprehensive reports, surveys, infographics, case studies, use cases and compelling data to showcase your expertise to VentureBeat’s core audience of business decision-makers.
Digital Marketing VB Lab produces videos, creates custom native ads and works with your team to design interactive digital marketing experiences that align with your brand.
VB Lab designs thoughtful go-to-market strategies that distribute your content across our Thought Leadership Platform, including:
Custom virtual events
Thought leadership content insight series
Event Speaking Opportunities
Interactive Branded Storytelling
Live Podcasts
Event Video Series Production
Surveys, Research, and Insight Partnerships
INFLUENCE THE INFLUENTIAL: REACH BUSINESS DECISION MAKERS VentureBeat is obsessed with covering transformative technology, and has built the most influential community of leading technology business decision-makers. With VB Lab, every message is created with the audience first in mind.
VentureBeat covers disruptive technology and explains why it matters in our lives. We’re the leading publication for news and perspective on the most innovative technologies, and we also bring the community together several times per year through executive-level conferences.
C-level executives 40% | AI Coverage #1 | Game Coverage #2 | Business decision makers 80% | AI news voice #1 | AI channel page views 20 mil
Build thought leadership and connect with business decision makers
"
|
3,046 | 2,023 |
"With U.S. tech salaries at a 5-year-low, here's how to make more money | VentureBeat"
|
"https://venturebeat.com/programming-development/with-u-s-tech-salaries-at-a-5-year-low-heres-how-to-make-more-money"
|
Sponsored Jobs With U.S. tech salaries at a 5-year-low, here's how to make more money Amidst a turbulent job market and rising inflation, new data is showing that tech workers in the U.S. are experiencing a drop in salaries.
Hired’s 2023 State of Tech Salaries report has revealed that while those seeking tech jobs in the U.S. seemed to maintain their salary expectations, once this figure is adjusted for inflation the reality is that salaries have dipped to their lowest point in five years.
Why U.S. tech salaries have declined Though average tech salaries hover at the $158,000 mark (nearly double that of the average U.S. knowledge worker), the combined impact of layoffs in the tech sector, hiring freezes, inflation and the explosion of generative AI has resulted in salary stagnation across the board.
The survey of more than 1,300 tech professionals reported that 54% of workers haven't seen their salary rise at the rate of inflation. Once inflation is factored in, salaries are down 6% for remote workers and 9% for in-office roles, compared to 2022.
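As a rough illustration of how an inflation adjustment turns a flat nominal salary into a real-terms pay cut, here is a minimal sketch; the figures below are hypothetical examples, not numbers from the Hired report.

```python
# Real (inflation-adjusted) change in pay, given a nominal change and an inflation rate.
# The example figures are hypothetical and only illustrate the formula.

def real_change(nominal_change: float, inflation: float) -> float:
    """Return the change in purchasing power, e.g. for a flat salary under 6% inflation."""
    return (1 + nominal_change) / (1 + inflation) - 1

if __name__ == "__main__":
    nominal = 0.00    # salary held flat year over year
    inflation = 0.06  # assumed annual inflation rate
    print(f"Real change in pay: {real_change(nominal, inflation):.1%}")  # about -5.7%
```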
Experienced employees were less likely to see their salary decrease, while junior staffers (those with less than four years of experience) are seeing the most dramatic dip at nearly 5% year on year.
This, alongside a drop in demand for less experienced roles (down from 45% in 2019 to 25% in the first half of 2023), means this cohort will be feeling the pinch the most.
Smart strategies for increasing your earning potential As if they weren't already tricky enough, salary negotiations can be even more challenging during times of big market changes or uncertainty. So how can you earn more money? Read on for some tips on increasing your earning potential. Know your worth Information is power, so make sure to research your role and industry's average salaries (taking into account experience and location) so that you know your worth before entering any salary conversations.
You can use online salary calculators, scour competitors' hiring pages for salary band info (many states and cities, including California, Washington, Colorado and New York City, mandate the inclusion of a salary range in job postings), or simply ask colleagues and friends who are in similar fields.
Look beyond tech companies for tech roles As the tech landscape has developed and matured, so too have the variety of roles, with almost every single industry needing tech workers of some kind.
To widen your pool, explore job openings at places beyond traditional tech companies. Industries whose core offering isn’t technology-based would have been less affected by recent volatility.
Consider relocating to a lower-cost-of-living location Earning more money isn’t just about salary but about actual living expenses. The Hired report showed that, once you adjust for the cost of living, workers in cities like Houston, Atlanta, Philadelphia and Phoenix were offered $40K more than their counterparts in San Francisco.
Many employers have decreased the number of open roles in high-cost-of-living markets. For example, positions based in San Francisco dropped from 38% in 2020 to just 19% in the first half of 2023. In the meantime, jobs in lower cost-of-living markets more than quadrupled, from 2% in 2020 to 9% in the first half of 2023.
If you find yourself considering a move, the VentureBeat Job Board features thousands of tech jobs from across the U.S., including the three below.
.NET Software Engineer, Noir Consulting, Phoenix
Noir Consulting is seeking a .NET Software Engineer to work with a well-established national marketing firm, creating a product that leaves a significant and enduring impact on people's marketing endeavors. In this hybrid role, you'll need a good grasp of C#, .NET Core, and SQL Server, as well as a thorough understanding of Agile methodologies. The role also offers the opportunity to avail of industry-recognized training in .NET 6+, Azure, Microservices and Angular/React.
See all the requirements for this role.
Senior Site Reliability Engineer, First American, Remote Named one of the Fortune 100 Best Companies to Work For, First American has grown from a small family-run business in 1889 to a $9 billion organization spanning more than 700 offices and 20,000 employees. As a Senior Site Reliability Engineer , you will support mission-critical software systems and work to automate IT infrastructure tasks. At least five years’ experience and a strong understanding of site reliability engineering best practices – including incident response, release management and capacity planning – are required for the role. Interested? Apply for this job now.
Senior Software Engineer, Resource Data, Inc., Houston Resource Data serves as a technology partner to businesses across the U.S., offering IT business consulting, software services, systems engineering as well as data and analytics services. As a Senior Software Engineer , you’ll work on a diverse range of projects spanning different industries, tech stacks and systems. You are known for being an innovative and decisive problem solver, adept at balancing business needs, client budgets and user demands.
Explore the role here.
Find your next great tech role by visiting the VentureBeat Job Board today.
"
|
3,047 | 2,023 |
"Samir Tabar shows how to navigate the crypto ecosystem | VentureBeat"
|
"https://venturebeat.com/business/why-business-leaders-should-carefully-navigate-cryptocurrency-investments"
|
Contributor Content Samir Tabar shows how to navigate the crypto ecosystem Samir Tabar, CEO, Bit Digital Founders and business leaders are no strangers to embracing innovation and harnessing emerging opportunities. Cryptocurrency, especially Bitcoin, has been a buzzworthy topic that's hard to ignore. However, alongside the allure of potential profits lies a pressing concern — the environmental impact of cryptocurrency mining. For business leaders who prioritize sustainability and responsible investing, understanding and addressing these concerns are paramount.
Cryptocurrency, epitomized by Bitcoin, has introduced a revolutionary digital concept underpinned by blockchain technology. The decentralized nature and potential for borderless transactions offer an exciting vision of financial innovation. Business leaders, ever attuned to emerging trends, may see crypto as both an investment opportunity and a way to reshape their industries. Yet, before diving headfirst into the crypto realm, prudent consideration is crucial.
Looking at environmental costs While crypto presents opportunities, its colossal energy consumption is undeniable. Bitcoin mining, the process by which transactions are verified and added to the blockchain, relies on energy-intensive computational tasks. The environmental footprint of this process, particularly its carbon emissions, raises significant concerns. Therefore, business leaders committed to sustainability should heavily scrutinize the investments they make.
Sustainable renewable energy holds the key to transforming the Bitcoin mining carbon footprint narrative. Today, miners seek out regions where renewable resources are abundant, whether it be wind, solar, hydroelectric or geothermal, to mitigate environmental impact. A pioneer in this area is Bit Digital, whose CEO, Samir Tabar and his co-founders moved its considerable Bitcoin mining operations from China to the United States, Canada, and Iceland to implement more sustainable practices and reduce its carbon footprint with renewable energy sources. As of this year, Bit Digital utilizes 99% carbon-free energy to power its miners.
The transition of Bitcoin mining’s energy usage to sustainable, renewable sources of energy demonstrates a commitment to environmental responsibility. By strategically harnessing renewable energy, miners like Bit Digital are not only reducing their carbon emissions but also contributing to broader sustainability goals. As more industry players begin to follow suit, Bitcoin mining may very well be a part of the solution to our global energy challenges rather than a problem.
Prioritizing ethical investments Business leaders often grapple with ethical choices in their ventures. The cryptocurrency space is no exception. The energy consumption associated with mining activities demands an examination of the alignment between investment choices and environmental values. Leaders must exercise due diligence in assessing the sustainability practices of crypto projects they consider and should prioritize investment in projects committed to carbon-neutral or energy-efficient mining such as Bit Digital. Doing so will help incentivize the rest of the industry to move over to sustainable operations.
As the crypto industry evolves, calls for regulatory oversight are growing louder. Governments worldwide are considering frameworks to manage the environmental impacts and financial risks associated with cryptocurrencies. Business leaders must vigilantly monitor regulatory developments to ensure they remain compliant while navigating the industry’s complexities. Simultaneously, investing in innovative technologies that drive energy efficiency within the crypto space now can contribute to an environmentally sustainable future while also safeguarding investments against future regulatory changes that may arise.
The imperative of responsible crypto investment Cryptocurrency investment offers an intriguing avenue for growth and diversification, but it also presents a moral challenge: How can business leaders balance the potential rewards with the environmental toll? Acknowledging and mitigating the environmental impact of cryptocurrency mining is not just an ethical choice; it’s a practical necessity.
In the quest for sustainable investment, leaders can pursue a multifaceted approach: diversify their portfolios to manage risk, demand transparency and commitment to sustainability from crypto projects, stay informed about evolving regulations, and support technological innovation that prioritizes energy efficiency.
As stewards of the business landscape, entrepreneurs have an opportunity to shape the crypto industry’s trajectory by prioritizing environmental responsibility. By marrying innovation with a deep sense of ethical duty, business leaders can drive positive change, not only for their portfolios but for the planet as well. The future of cryptocurrency investment is inextricably linked to its sustainability, and it’s the leaders’ choices today that will determine the industry’s tomorrow.
VentureBeat newsroom and editorial staff were not involved in the creation of this content.
"
|
3,048 | 2,023 |
"Snorkel AI Awarded Air Force Contract to Automate Data Labeling | VentureBeat"
|
"https://venturebeat.com/business/snorkel-ai-awarded-air-force-contract-to-automate-data-labeling"
|
Press Release Snorkel AI Awarded Air Force Contract to Automate Data Labeling AFWERX selects Snorkel AI for a $1.24M SBIR Phase II contract for Automated Data Labeling of ISR Sensor Data SAN FRANCISCO–(BUSINESS WIRE)–October 31, 2023– Snorkel AI today announced it has been selected by AFWERX for an SBIR Phase II contract in the amount of $1.24 million focused on automated data labeling of intelligence, surveillance, and reconnaissance (ISR) sensor data to address the most pressing challenges in the Department of the Air Force (DAF). The Air Force Research Laboratory and AFWERX have partnered to streamline the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) process by accelerating the small business experience through faster proposal to award timelines, changing the pool of potential applicants by expanding opportunities to small business and eliminating bureaucratic overhead by continually implementing process improvement changes in contract execution.
The DAF began offering the Open Topic SBIR/STTR program in 2018, which expanded the range of innovations the DAF funded, and now, on September 20, 2023, Snorkel AI will start its journey to create and provide innovative capabilities that will strengthen the national defense of the United States of America.
“Snorkel AI’s data-centric AI approach with programmatic labeling is well suited to build mission AI applications to identify events of interest in real-time,” said Alex Ratner, CEO and co-founder, Snorkel AI, “Our Phase II award supports the United States Air Force’s quest to rapidly capitalize on emerging AI technology.” The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government.
About Snorkel AI, Inc.
Founded by a team spun out of the Stanford AI Lab, Snorkel AI makes AI application development fast and practical by unlocking the power of machine learning without the bottleneck of manually-labeled training data. Snorkel Flow is the first data-centric AI platform powered by programmatic labeling. Backed by Addition, Greylock, GV, In-Q-Tel, Lightspeed Venture Partners and funds and accounts managed by BlackRock, the company is based in Palo Alto. For more information on Snorkel AI, please visit: https://www.snorkel.ai/ or follow @SnorkelAI.
About AFRL The Air Force Research Laboratory is the primary scientific research and development center for the Department of the Air Force. AFRL plays an integral role in leading the discovery, development, and integration of affordable warfighting technologies for our air, space and cyberspace force. With a workforce of more than 12,500 across nine technology areas and 40 other operations across the globe, AFRL provides a diverse portfolio of science and technology ranging from fundamental to advanced research and technology development. For more information, visit www.afresearchlab.com.
About AFWERX As the innovation arm of the DAF and a directorate within the Air Force Research Laboratory, AFWERX brings cutting-edge American ingenuity from small businesses and start-ups to address the most pressing challenges of the DAF. AFWERX employs approximately 325 military, civilian and contractor personnel at six hubs and sites executing an annual $1.4 billion budget. Since 2019, AFWERX has executed 4,697 contracts worth more than $2.6 billion to strengthen the U.S. defense industrial base and drive faster technology transition to operational capability. For more information, visit: www.afwerx.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20231031112824/en/ Ignacio Ramirez [email protected]
"
|
3,049 | 2,023 |
"Manny Brown Becomes Investor in Formation Games | VentureBeat"
|
"https://venturebeat.com/business/manny-brown-becomes-investor-in-formation-games"
|
Press Release Manny Brown Becomes Investor in Formation Games Manny and Formation Games to create ownership game and give a deeper insight into football club ownership in new video series with his own football club, Under The Radar FC LONDON–(BUSINESS WIRE)–October 31, 2023– Formation Games, developer of upcoming mobile football ownership game CLUB, today announced a significant partnership with YouTube superstar Manny Brown. In addition to investing in the studio, Manny and Formation Games will team up for a documentary on Under The Radar FC, peeling back the curtain on football club ownership.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20231031939677/en/ YouTube superstar Manny Brown in CLUB kit. Manny joins the formidable array of Formation Games backers from the worlds of gaming, football, media and entertainment. (Photo: Business Wire) Manny is an iconic figure in the football community and a major creator across social media platforms, with over 2 million subscribers and 400 million views across his channels. He was also the star of the recent Sidemen charity match at the London Stadium, scoring a hat-trick in front of a capacity crowd of 62,000 people.
Manny joins the formidable array of Formation Games backers from the worlds of gaming, football, media and entertainment. His unique perspective as the owner of Under The Radar FC, a club he founded 5 years ago, will provide a significant voice to aid Formation Games in the development of their upcoming football ownership title CLUB.
“It’s brilliant to be involved with Formation Games as they develop CLUB.
Football and video games are two of my biggest passions, so being able to combine both was an easy decision,” said Manny. “As someone so heavily involved in club ownership, I’m looking forward to helping shape a new football game and create content that shows what goes into club ownership at the grassroots level.” “We’re really proud to welcome Manny as an investor in Formation Games and to launch our new partnership with Under The Radar FC,” said Tom Russell, Marketing Director. “Manny understands club ownership firsthand. His knowledge will be invaluable in creating an authentic game experience that resonates with audiences around the world.” Currently in closed testing, CLUB is the football ownership entertainment experience where Club Owners (COs) build their dream club from the ground up. More than management, COs make crucial decisions on every aspect of their club from their kit, stadium and sponsor, to signing real players based on real-world data and climbing the leagues to continental glory. With a narrative authentic to football culture and strategy gameplay from some of gaming’s brightest development talents and football’s most authoritative leaders, CLUB is a genre-shattering social experience coming to mobile devices in 2024.
Formation Games is led by CEO Jonty Barnes, a games industry veteran of 33 years and former General Manager of Bungie's Destiny franchise. Formation's Chair, Alex Horne, is an experienced business leader who was CEO of The English FA for five years. For investment enquiries in Formation Games, please contact [email protected]. CLUB is currently in development ahead of testing in 2024. For more information visit ClubGame.app.
Follow Club on Twitter @clubgame_app.
About Formation Games Formation Games was founded in 2021. CLUB is its first title and the first free-to-play football ownership game that allows you to build and own your own football club. The studio believes in bringing players an authentic feeling of club ownership whilst leveraging real-world athlete performances. Learn more at www.clubgame.app View source version on businesswire.com: https://www.businesswire.com/news/home/20231031939677/en/ Gianfranco Lagoia @ Honest PR [email protected]
"
|
3,050 | 2,023 |
"Hardfin Introduces First Technology To Enable Hardware-as-a-Service (HaaS) Companies To Automate Contract-To-Cash Cycle | VentureBeat"
|
"https://venturebeat.com/business/hardfin-introduces-first-technology-to-enable-hardware-as-a-service-haas-companies-to-automate-contract-to-cash-cycle"
|
Press Release Hardfin Introduces First Technology To Enable Hardware-as-a-Service (HaaS) Companies To Automate Contract-To-Cash Cycle SAN FRANCISCO–(BUSINESS WIRE)–October 31, 2023– Hardfin, the leader in financial operations for modern hardware companies, announces the release of Control Center to dramatically reduce time spent in the contract-to-cash cycle. Control Center automatically notifies sales, finance, and operations stakeholders of actions needed to keep the business in sync, shortening the billing cycle by 60-70%.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20231031253815/en/ Hardfin’s first-of-its-kind Control Center shortens the contract-to-cash cycle for hardware-as-a-service (HaaS) companies. Intelligent notifications help sales, finance, and operations teams understand immediately what needs to be done, shortening the billing cycle by 60-70%. (Photo: Business Wire) “Decreasing contract-to-cash time is a mission-critical priority for us,” says Manny Guerrero , Chief Financial Officer at Fox Robotics.
“Hardfin has already provided tremendous value to Fox and we’re excited about this expanded capability to optimize our hardware financial operations.” Hardware-as-a-Service (HaaS) companies spend a lot of time tracking down information, usually cobbling together emails, spreadsheets, and data from a CRM or ERP. This means delays across the business, especially in billing operations. Hardfin analysis shows that average days to send invoices (ADS) at hardware companies is often twice as long as average days to collect (ADC).
"We hear every day about the struggle that HaaS companies have keeping information straight across teams. It negatively impacts customers, hurts revenue, slows billing, and causes accounting issues," says Zachary Kimball, co-founder & CEO of Hardfin. "I'm proud that Control Center is already helping tackle this problem – intelligently surfacing the right actions at the right time to streamline workflow and improve performance across teams." Cross-functional activity in the hardware industry used to rely on manual process and oversight. Now, Control Center acts automatically to remove the guesswork and deliver consistency, accuracy, and efficiency:
Create a subscription for Sales when a customer signs a new purchase order
Send notifications for the Operations team when a contract is activated
Track assets and capitalization for Accounting when assets are shipped or configured
Start software billing for Accounts Receivable when assets pass system acceptance
Manage accrued/deferred revenue for Finance when invoices are sent
Control Center's powerful automations are enabled by Hardfin's dynamic linking of assets and contracts. Hardfin is the first software to manage hardware financial operations with linked assets, which makes it possible to track the full lifecycle of hardware subscriptions.
Streamlined actions ensure consistent information flow across functions, solving a major pain point for equipment-as-a-service (EaaS) companies. One major application of this new technology supports the growing robots-as-a-service (RaaS) industry.
For more information, see the Hardfin website.
To learn more about HaaS, read the Hardfin guide to hardware financial operations.
To stay up to date, follow Hardfin and Zachary Kimball on LinkedIn.
View source version on businesswire.com: https://www.businesswire.com/news/home/20231031253815/en/ Madi Waggoner Email: [email protected] Phone: +1 (415) 969-3100 Brand: Media kit
"
|
3,051 | 2,023 |
"Casey Terrell, CMO of SPB Hospitality, Joins RAD AI's Advisory Board to Champion AI Adoption in the Hospitality Industry | VentureBeat"
|
"https://venturebeat.com/business/casey-terrell-cmo-of-spb-hospitality-joins-rad-ais-advisory-board-to-champion-ai-adoption-in-the-hospitality-industry"
|
Press Release Casey Terrell, CMO of SPB Hospitality, Joins RAD AI's Advisory Board to Champion AI Adoption in the Hospitality Industry LOS ANGELES–(BUSINESS WIRE)–October 31, 2023– RAD AI, the leader in AI-driven marketing and communication solutions, today welcomed Casey Terrell to its advisory board. Terrell brings over 15 years of marketing leadership experience in the hospitality sector and plays a pivotal role in driving AI adoption for unbiased marketing strategies that enhance customer experiences in the industry.
As the Chief Marketing Officer at SPB Hospitality, Terrell leads the marketing strategy for a diverse portfolio of restaurant brands with a national footprint spanning hundreds of locations. His expertise in marketing technology, marketing communications and hospitality has propelled digital transformation, digital marketing, advertising, and brand initiatives that have significantly enhanced customer experiences, loyalty, and retention.
RAD AI specializes in AI-driven marketing and communication solutions, and its unique approach focuses on eradicating bias in content decision-making and providing marketing professionals with AI-driven creative direction. Its innovative AI technology equips brands with actionable insights for influencer marketing and content creation, transforming customer engagement and loyalty.
Jeremy Barnett, CEO of RAD AI, said, “Casey’s wealth of experience in marketing and his dedication to enhancing customer experiences align perfectly with our mission at RAD AI. His insights will be invaluable in championing AI adoption in the hospitality industry, helping brands create more personalized and unbiased marketing strategies.” Terrell expressed his enthusiasm for the role, stating, “RAD AI’s approach to AI-driven marketing is incredibly innovative and timely. I’m excited to work with the team to further enhance customer experiences within the hospitality sector.” This appointment comes at a time when the hospitality industry is increasingly embracing AI technology to create more personalized and engaging marketing strategies. RAD AI’s AI-powered solutions align perfectly with this trend and offer an opportunity for brands to leverage AI-driven marketing and communication to improve customer experiences.
As an advisor, Terrell will lead initiatives to implement RAD AI’s technology for unbiased marketing and communications in the hospitality industry. His role will involve guiding brands to adopt RAD AI’s cutting-edge technology for unbiased marketing and communication strategies.
For more information about RAD AI and its innovative AI-driven marketing and communication solutions, please visit https://www.radintel.ai.
About RAD AI: RAD AI is a leading innovator in AI-driven marketing and communication solutions. The company’s technology focuses on eradicating bias in content decision-making and providing brands with actionable insights for influencer marketing and content creation. RAD AI’s mission is to transform customer engagement and loyalty through unbiased marketing and communication.
View source version on businesswire.com: https://www.businesswire.com/news/home/20231031865363/en/ Katie Gerber (408) 799-5864 [email protected]
"
|
3,052 | 2,023 |
"When tightly managing costs, smart founders will be rigorous, not ruthless | VentureBeat"
|
"https://venturebeat.com/automation/when-tightly-managing-costs-smart-founders-will-be-rigorous-not-ruthless"
|
Sponsored When tightly managing costs, smart founders will be rigorous, not ruthless Presented by Planful Founders have faced a rollercoaster ride over the past few years. First, the pandemic ground the economy to a halt. That was followed by a growth-at-all-costs era fueled by cheap money and historically low technical barriers to entry, and then a talent crisis hampered that growth. Most recently a funding crunch has made cash preservation paramount. With growing global instability and capital costs continuing to rise, founders must now figure out how to extend runways beyond 2024 or risk failure.
Most have made the necessary cuts to discretionary costs and unnecessary expenses by diligently analyzing the data. But this next round of cost-cutting needs a more strategic, thoughtful approach if you want your business to see 2025.
The founder’s trap Founders are dreamers and risk-takers, even if they don’t know it. I’ve often said that to do a startup one must have a healthy mix of naivety and arrogance, because statistically the vast majority of startups fail. Even for the founders that do eventually succeed, it’s often a brutal experience they have to go through (sprinkled with occasional fun).
The fundamental way the startup world functions, via pitches and funding rounds, creates a validation trap that many founders fall into. When securing a funding round ‘validates’ the founding team’s ideas and approach, then the funding itself becomes the primary goal. What separates successful founders is that they have the intellectual honesty to recognize that the odds are stacked against them, even the day after a new round closes.
Many economists, VCs and business leaders predict the economy may slow down even more in 2024, and at best may be similar to 2023. A grand vision for market domination might now take two, three, or even four years longer than what was planned just a few quarters ago. Smart founders are now hyper-focused on survivability, moderate growth and extending runways as far as possible because they must convince customers and investors that they have a plan to make it through.
Too many founders take a spreadsheet-based approach to cost-cutting. It's simple math to calculate that, given your current burn rate, your bank account will dry up in X months. If you cut costs across the board by a fraction Y, your runway stretches to X / (1 - Y) months. Done.
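As a minimal illustration of that spreadsheet math (the figures below are hypothetical examples, not numbers from Planful):

```python
# Runway arithmetic: months of cash left before and after an across-the-board cut.
# All figures are hypothetical examples.

def runway_months(cash: float, monthly_burn: float) -> float:
    """Months until the bank account runs dry at a given burn rate."""
    return cash / monthly_burn

if __name__ == "__main__":
    cash, burn, cut = 2_000_000, 200_000, 0.20  # $2M in the bank, $200K/month burn, 20% cut
    before = runway_months(cash, burn)             # 10.0 months
    after = runway_months(cash, burn * (1 - cut))  # 12.5 months, i.e. X / (1 - Y)
    print(f"Runway before cut: {before:.1f} months; after a {cut:.0%} cut: {after:.1f} months")
```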
But ruthless, thoughtless, blanket cost-cutting can anger customers, alienate workers and threaten the ability for a company to flourish once conditions improve.
Rigorous vs. ruthless Instead of ruthless, indiscriminate cost-cutting, it is wise to be very frugal about what doesn’t matter while you continue maintaining or even moderately investing in the things that do matter.
When making cuts, never lose sight of your people. They’re anxious about the future, and you can’t expect to add more stress and excessive demands to already-stressed workers. It might be a cliché, but I’ve found that “people first” really matters. I put a premium on company culture, especially during tough times. The outright elimination of things like team lunches, in-person meetings and little daily perks creates instant animosity.
Thoughtful cuts instead create visible and tangible reminders of the current environment, especially when considering how important in-person gatherings are to sustaining a robust culture in a remote work environment. Instead of quarterly in-person employee meetups, move to annual and replace the others with a DoorDash gift card and a video meeting. Curtailing all travel — both sales calls and team meetups — not only hurts morale, it allows justifiable excuses for missed targets, lost deals and churned customers. But tightening travel policies for everyone is a very visible and effective way to cut costs while setting the right tone.
Don’t underestimate the value in making these cuts visible. I make sure my kids see me turning off the lights they constantly leave on around the house. It’s not only because electricity costs money, it’s because leaving lights on wastes money. Everyone from the c-suite on down will recognize the value of what you’re doing and how these cutbacks extend the runway, but only if everyone sees it.
Bring the data I suggest looking beyond the data, not ignoring it. With today’s cloud-based finance, accounting, analysis and reporting tools, there is no excuse for not having a deft handle on every aspect of your business. As you search for areas to trim, the stories behind those numbers are more important. Collaboration and alignment across the business turns overall financial performance into a team sport, and data equips your team to play.
Here are four ways you should be using data to extend your runway: Move to zero-based budgeting (ZBB).
Typical budgeting begins with a number, and teams work to spend it all. It creates the wrong incentives, promotes empire creation where higher expectations garner more resources, and instills a use-it-or-lose-it incentive where spending it all is more important than saving any. ZBB rewards frugality and results. Every team begins each quarter with zero and builds up from there, focusing on results today instead of targets down the road. Note: it's much harder for larger companies (which you are probably trying to disrupt) to do ZBB, so instilling this approach as a startup can give you meaningful long-term advantages.
Realign compensation.
Compensation is usually aligned with budget targets and spending goals. Zero-based budgeting lets you realign compensation with results to reward people for creating more value and ROI.
Use rolling forecasts.
A limited runway can be cut short if you don’t quickly adapt. Get out of that fiscal year box and increase agility by continuously adjusting to what’s happening. Instead of sticking to that static budget you created last year, rolling forecasts let you modify assumptions and adapt plans based on whatever comes your way.
Automate.
AI-enabled automation is going to change everything. Now is the time to dramatically increase your investment in automation to drive productivity. Cutting costs requires you to do more with less. Automation uses your data to increase efficiency and productivity, which you’ll need to survive the next few years.
Start with people Taking the “cut everything by 20%” approach to extend your runway is a flight plan for failure. Look around and start asking the right people the right questions. Give everyone ownership of trimming the total cost envelope. Find out what makes your company great, determine what matters and invest in it. Uncover what doesn’t matter, find the waste and cut it. Above all, explain ‘the why’, be transparent and be unapologetic. Your mission is the survival of your company, above all else. Later you can return to growth, and will probably be a better company for what you experienced in this survival phase.
What’s critical are your people. They’re the crew that’s going to get you up to speed on this shortened runway. If you start by cutting employee engagement, eliminating what defines your culture and ignoring the marketing and product investments that move the needle, key people will eject and you may not even make it to 2025.
Grant Halloran is Chief Executive Officer of Planful.
"
|
3,053 | 2,023 |
"White House unveils AI.gov in a historic move towards comprehensive AI oversight | VentureBeat"
|
"https://venturebeat.com/ai/white-house-unveils-ai-gov-in-a-historic-move-towards-comprehensive-ai-oversight"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages White House unveils AI.gov in a historic move towards comprehensive AI oversight Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The White House on Monday unveiled AI.gov, a new website that showcases the federal government’s efforts and achievements in artificial intelligence (AI), in addition to providing resources and guidance for researchers, developers and the public.
The website, which was announced by President Biden in a press conference on Monday, is part of his administration’s broader strategy to advance the development and adoption of AI in the United States while ensuring its ethical and responsible use.
“We’re going to see more technological change in the next 10…maybe next 5 years, than we’ve seen in the last 50 years, and that’s a fact,” Biden said. “The most consequential technology of our time, artificial intelligence, is accelerating that change, and it’s going to accelerate it at warp speed.” The launch of AI.gov comes as the Biden administration issues its first-ever executive order on AI, requiring federal agencies to meet new standards for testing, evaluating and monitoring AI systems. Together, these represent the most significant actions to date by the U.S. government to harness AI responsibly.
A new era in AI governance The website will serve as the go-to resource for information on AI safety and security standards, civil rights guidance and labor market impacts. It also aims to streamline the recruitment process for AI positions within the federal government, signaling an earnest effort to cultivate a robust public sector AI workforce.
A key feature is the portal for the government’s new National AI Talent Surge , which aims to rapidly recruit technical experts to build and govern AI systems per the administration’s values.
The website also provides information on how the government is investing in AI research and development, such as through the National Artificial Intelligence Initiative (NAII) , which was established by an executive order signed by Biden in October. The NAII aims to coordinate and accelerate federal AI activities across agencies and sectors, as well as foster collaboration with academia, industry and civil society.
Additionally, the website offers guidance and best practices on how to implement and use AI in a trustworthy and ethical manner, such as through the National Artificial Intelligence Research Resource (NAIRR) , which was proposed by a bipartisan bill in June. The NAIRR would create a shared cloud computing platform that would provide access to high-quality data sets, computing resources, and educational tools for AI researchers and students.
White House stresses balanced approach The launch of AI.gov comes amid growing global competition and cooperation in AI, especially with China, which has declared its ambition to become the world leader in AI by 2030.
The website aims to demonstrate the U.S.’s commitment and leadership in advancing AI for the benefit of humanity, as well as its willingness to engage with other countries and international organizations on common challenges and opportunities.
The website also reflects the Biden administration’s recognition of the importance and urgency of developing a national AI strategy that is inclusive, transparent and accountable. As Biden said in his press conference: “One thing is clear: To realize the promise of AI and avoid the risk, we need to govern this technology. There’s no way around it, in my view. It must be governed.”
"
|
3,054 | 2,023 |
"Weav exits stealth with plug-and-play AI copilots for enterprises | VentureBeat"
|
"https://venturebeat.com/ai/weav-exits-stealth-with-plug-and-play-ai-copilots-for-enterprises"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Weav exits stealth with plug-and-play AI copilots for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
California-based Weav, a startup working to transform how companies build and use generative AI in their workflows, today came out of stealth with the launch of Enterprise AI Copilots — a suite of low-code, plug-and-play tools that can integrate different generative AI capabilities into existing systems and business processes.
The launch follows Weav’s seed funding round from Sierra Ventures. The suite aims to save enterprise teams the hassle of building and integrating AI into their systems, from building and training a model to deploying and monitoring it.
“Business users should be able to initiate a use case and bring in the right data to activate AI at the right places and see results,” Weav CEO and co-founder Peeyush Rai told VentureBeat. “The key (here) is to build the right level of abstraction when designing the platform, which is what we have tried to do with our copilot approach.” A plug-and-play offering that cuts down the time and effort needed to integrate AI could be a game changer for teams looking to take advantage of the technology in their workflows, especially small and medium-sized ones (SMBs) that are often resource and staff-constrained.
How do Weav Enterprise AI Copilots work and help? With its copilots, Weav provides enterprises with three key things: ready-to-use generative AI capabilities, connectors to pull data from commonly used enterprise tools and an API that lets developers incorporate the capabilities into various enterprise workflows and applications.
Everything needed to keep the capabilities running, or the infrastructure stack, comes pre-integrated with the copilots, including integrations, prompt management, foundation models like GPT-4 and Llama 2, vector databases, and security and monitoring.
Currently, the company offers copilots for three key AI-driven functions: Document, Conversation and Search.
The Document copilot ingests unstructured data like documents, images, spreadsheets and JSONs, prepares that information and extracts key entities and values. This lets users search their docs in natural language, summarize them or define criteria to assess compliance.
The Conversation copilot goes a step further by allowing users to “converse” with their data in natural language. It understands users’ intent and performs the appropriate actions to get the job done.
Finally, the Search copilot allows contextual search across both unstructured and structured data sources using natural language and then translates the search into the appropriate native queries based on which data sources or repositories the information is found in.
“When data is processed or a user initiates an action, the copilots orchestrate multiple processes in the back-end, including applying guardrails to protect users and data, querying the embeddings in the vector databases, searching knowledge bases, or running a query on the database, and then composing the results to pass to the large language model (LLM) to generate a natural language response,” Rai noted. “We are model agnostic. We have our own smaller models for specific tasks, and we can use any third-party LLM.” In most applications, he said, the copilots work together to deliver a seamless experience to users as they extract value from their unstructured and structured data.
On the model side, the company currently offers support for OpenAI’s GPT-4, GPT-3.5 and Llama 2 out of the box, with on-demand integrations for Anthropic’s Claude and Cohere’s various models.
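Weav has not published its internal APIs, but the orchestration Rai describes maps closely onto a standard retrieval-augmented generation loop. The sketch below is a generic, hypothetical illustration of that pattern; every function name in it (apply_guardrails, vector_search, call_llm and so on) is an assumption for illustration, not part of Weav's product.

```python
# Hypothetical sketch of a copilot-style orchestration:
# guardrails -> retrieval over a vector DB / knowledge base -> model-agnostic LLM call.
# None of these functions are Weav APIs; they stand in for whatever backend is used.

from typing import Callable, List

def apply_guardrails(user_query: str) -> str:
    """Placeholder policy check; a real system would screen for PII, prompt injection, etc."""
    if not user_query.strip():
        raise ValueError("empty query")
    return user_query

def vector_search(query: str, top_k: int = 5) -> List[str]:
    """Stand-in for an embedding lookup against a vector database."""
    return [f"doc snippet {i} relevant to: {query}" for i in range(top_k)]

def compose_prompt(query: str, passages: List[str]) -> str:
    context = "\n".join(passages)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

def run_copilot(query: str, call_llm: Callable[[str], str]) -> str:
    """Orchestrate one request; `call_llm` can wrap GPT-4, Llama 2 or any other model."""
    safe_query = apply_guardrails(query)
    passages = vector_search(safe_query)
    prompt = compose_prompt(safe_query, passages)
    return call_llm(prompt)

# Model-agnostic usage: swap in any provider's completion function.
fake_llm = lambda prompt: f"(model answer based on {prompt.count('doc snippet')} retrieved passages)"
print(run_copilot("Which contracts are out of compliance?", fake_llm))
```

The model-agnostic design shows up in the last lines: any provider's completion function can be swapped in without touching the rest of the pipeline.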
Promising early results with adoption by big players With the power of LLMs now known to almost every enterprise, it’s not hard to imagine how companies could put Weav’s copilots to use.
The company said its plug-and-play technology is being piloted by some of the largest companies in the world, including a multinational management consulting firm operating in over 40 countries, an F100 pharmaceutical conglomerate with globally distributed teams and one of the fastest-growing e-commerce platforms.
While the companies are still in the initial stages of implementation and use, Rai noted that early results show that the copilots have achieved result precision ranging from 87% to 95%, and productivity gains or cost reductions up to 75%.
Plan to stand out After the seed round in November 2022, Weav’s focus was on getting the platform ready for enterprise scale. Now, with the official launch of the copilots, the company is moving to build up its go-to-market and sales engines to rope in more customers.
Beyond this, Weav also plans to invest resources into expanding the set of models supported on the platform. It will develop some core algorithms as well as its multi-modal foundation model, enabling enterprises to do more with their unstructured data.
As the company moves ahead with its product, it recognizes that this will indeed turn out to be a competitive space.
Dataiku and Databricks are already helping enterprises with gen AI deployment and Rai expects that more companies will soon be jumping on the bandwagon.
“We see four developing trends in the ‘competitive’ landscape, broadly. First, we anticipate that big tech companies like Microsoft, Google and Amazon will sell Generative AI tooling and infrastructure into their existing accounts. Then, there are incumbent software companies that were using previous-generation technologies to build chatbots or narrow NLP models and new startups. Finally, we also anticipate internal IT organizations who may want to attempt to build it by themselves,” Rai said.
In this race, he said, the winners will be those providing real business value to enterprises with the fastest time-to-value and the lowest total cost of ownership (TCO), which is exactly what Weav currently targets.
“Our promise to customers is to show initial value in 2-4 weeks and production deployments in 4-6 weeks. The speed to value is very important. These combinations of factors would differentiate us,” he added.
According to estimates from McKinsey , with generative AI’s implementation, retail and consumer packaged goods companies alone could see an additional $400 billion to $660 billion in operating profits annually. Across sectors, it has the potential to generate $2.6 trillion to $4.4 trillion in global corporate profits.
"
|
3,055 | 2,023 |
"Virtual stores drive sales for 88% of retailers | VentureBeat"
|
"https://venturebeat.com/ai/virtual-stores-drive-sales-for-88-of-retailers"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Virtual stores drive sales for 88% of retailers Share on Facebook Share on X Share on LinkedIn J.Crew virtual beach house experience, courtesy of Obsess Presented by Obsess There are nearly 27 million ecommerce stores worldwide today. Meanwhile, the average shopper’s attention span has dwindled to less than five seconds — muddling the purchase decision funnel in a highly saturated market. Retailers are eager to understand how to win the diminishing attention of consumers, and then turn that attention into sales.
Enter immersive experiences, a suite of interactive shopping solutions that have now reached scale and widespread adoption amongst consumer brands and retailers.
A recent Coresight report finds that more than half of brands and retailers are looking to invest in immersive experiences during the next three years, with each planning to spend an average of $234K+ on related technologies annually.
Coresight defines “immersive experiences” as related to any of the following digital shopping components: Virtual stores VR/AR-enabled virtual try-on Data/AI-enabled content for personalization Virtual events and fashion shows Social shopping Livestreaming Gamified shopping experiences Virtual styling services The company reports that immersive experiences will be a top-three priority investment area for brands and retailers during the next 12 months — preceded only by ecommerce and mobile website agility and marketing campaigns/influencer marketing. Within immersive experiences, data/AI-enabled content for personalization ranks as the top investment priority, followed by VR/AR-enabled try-on and virtual stores.
Obsess is the most scaled virtual platform in the industry, with over 300 immersive experiences in categories ranging from fashion and beauty, to CPG/FMCG, home and media — with the most data in the world on how consumers behave in 3D shopping experiences. The platform transforms the traditional 2D ecommerce thumbnail grid into a browser-based 3D, visual, engaging and social experience that integrates brand storytelling with next-generation commerce.
Coresight’s foremost investment priority ranking seems inevitable, as conversations around AI continue to flood retailers’ newsfeeds and investments prove valuable in generating returns. According to Coresight’s report: 77% of brands and retailers that have invested in data/AI-enabled content for personalization have seen increases in online sales. Fashion companies, in particular, report benefiting from increased customer acquisition and conversion as a result.
Obsess incorporates AI-driven personalization into immersive experiences, in order to give retailers a brand-safe channel to drive customer acquisition and conversion through more dynamic, meaningful content. Generative AI is leveraged to expedite the creation of virtual experiences, making the set-up process faster and more cost-effective, slashing go-to-market time from months to weeks. It is also used to elevate the user experience in immersive stores through the inclusion of generative content — including imagery, copy, gaming and quiz results, chatbots and virtual assistants.
For example: Babylist created an AI-enabled Baby Name Generator game in their Virtual Store, which accounts for an expected newborn’s birth date and gender in order to output a dynamic list of baby name recommendations.
The second and third immersive investment priorities for retailers — VR/AR-enabled try-on and virtual stores, respectively — give way to a rising trend in virtual technologies as a tool to shrink the traditional purchase funnel. On average, it takes five to eight touchpoints with a retailer for a consumer to make a purchase decision. Of course, for luxury and high-priced brands, this cycle is even longer. Brands and retailers are now using virtual channels to drive ecommerce customer acquisition, product page clicks and conversion all in a single shopping environment, in order to reduce the number of touchpoints in the customer journey and increase efficiencies in marketing.
According to Coresight, 88% of retail decision-makers that have invested in virtual stores have reported an uplift in sales, following the launch of a virtual store. Likewise, 67% have seen an increase in new customers and 77% have seen an increase in clicks to product pages. This is due to the engaging, relevant, highly memorable nature of immersive experiences.
Brands that work with Obsess see up to 10X higher session times in their virtual stores, compared to ecommerce. Unlike traditional ecommerce stores, which are built for directed shopping, virtual stores are built to encourage customers to browse and discover, and spend time engaging with branded content in a CGI-rendered environment. Within a single virtual store, brands can tell their stories through gaming , quizzes and interactive media while simultaneously showcasing products through user-activated virtual try-on, product customizers and 3D look-builders. Virtual stores are also fully integrated with ecommerce platforms like Shopify and Salesforce, enabling consumers to seamlessly add to cart and checkout at any point.
Within Obsess virtual stores, companies have seen direct success, including: a Luxury Fashion Brand that saw a 75% higher conversion rate in its virtual store compared to traditional ecommerce; a Prestige Beauty Brand that saw a 35% increase in AOV in its virtual store compared to traditional ecommerce; and a Global Cosmetics Brand that saw a 109% increase in time spent in its virtual store compared to traditional ecommerce and, consequently, a 112% increase in checkouts from virtual store visitors. Immersive experiences will define the next generation of shopping, and Coresight’s latest report reaffirms how rapidly adoption is already growing. By adding experiential components into ecommerce, such as virtual stores and AI-enabled content for personalization, brands and retailers can unlock access to the new generation of gaming-native consumers and convert them into loyal customers. Retailers need to be conscious that 55% of their competitors will "definitely" increase investment in immersive experiences during the next three years, and 86% will do so before 2033.
Looking ahead, immersive technologies are poised to continue evolving at a rapid clip, especially as processors become increasingly powerful and network speeds get faster around the globe. 3D digital interfaces — which are, by nature, more intuitive and aligned to the way that humans interact in real life — will extend into every part of the internet, well beyond just shopping. Apple’s recent announcement of its new Vision Pro spatial computer, for example, will bring 3D into the mass market over the next five years. Immersive devices and apps will scale to mass adoption during this time as they will provide a more simplified, natural and human interface to technology.
Neha Singh is the CEO & Founder of Obsess, an experiential ecommerce platform enabling brands and retailers to create visual, immersive, 3D virtual stores. She was previously the Head of Product at Vogue, where she was responsible for the product strategy and technology execution of Vogue’s digital business including content products, ad products and distribution platforms. Prior to that, Neha was VP of Product and Engineering at AHAlife, an ecommerce startup for luxury lifestyle products. Neha began her career at Google, where she was a Software Engineer and Tech Lead for 5 years and worked on Google AdWords and Google News. She holds an undergraduate Computer Science degree from The University of Texas at Austin and a graduate Computer Science degree from MIT.
"
|
3,056 | 2,023 |
"VentureBeat launches a GenAI tour for 2024 | VentureBeat"
|
"https://venturebeat.com/ai/venturebeat-launches-a-genai-tour-for-2024"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event VentureBeat launches a GenAI tour for 2024 Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
VentureBeat, a publication leading coverage of generative AI news, is launching an event tour of major cities in the U.S.
– including one near you – that explores how to best put generative AI to work in your business.
VentureBeat’s editorial team will curate a series of exclusive salon events that will be taken on the road, in close partnership with leading influencers and practitioners in enterprise AI, and with support from sponsors that are leading enterprise providers in the area.
Called the AI Impact Tour, the salon series will bring together AI decision makers from key industries around the hottest enterprise topics in eight cities across the U.S.
The first salon event is set for Jan 10 in San Francisco (request an invite) and focuses on “Getting to an AI governance blueprint.” Governance has taken center stage again lately, in the face of powerful new capabilities of large language models (LLMs), including autonomous AI that can run without human intervention.
At this event, we’ll debate the leading frameworks organizations can implement to oversee, regulate and guide their AI projects. We’ll moderate a conversation with a special speaker, to be announced, who will showcase how their business has successfully adopted an AI governance model.
Subsequent dates and cities will be announced soon, so stay tuned. Each event will include networking, so that you can meet influencers in your industry as well as generative AI enablers. Some events in the series will include curated talks around specific verticals, such as finance, health, pharma, cybersecurity, technology and more.
We believe generative AI is about to revolutionize almost every domain, function and workflow of the enterprise, but that targeted conversations are necessary for audiences to understand how to stay in front of this change. The pace of change has been so swift, and the conversation so early, that we’ve yet to see nuanced, credible discussions hosted by independent parties about enterprise generative AI even in leading hubs like the SF Bay Area and New York City, not to mention other tier-one cities like Chicago and Los Angeles.
Moreover, we believe it’s imperative that companies keep pushing ahead aggressively to experiment and deploy LLM technology in order to stay competitive, given its transformative power. But at the same time, companies should make a serious commitment to ensuring that it’s safely deployed. This two-pronged approach – of attention to capability, but also to safety and governance – is critical.
The events will conclude with breakout roundtables where attendees can join small group discussions on topics related to ChatGPT and generative AI. The roundtables will be moderated by VentureBeat’s editors and reporters who cover the latest trends and developments in AI.
The AI Impact Tour salons will be open to 100 attendees each, and you’ll want to reserve your spot early to gain insights into your industry and meet key decision makers and VB’s writers covering this space.
Request an invitation to the AI Impact Tour today. Don’t miss this opportunity to be part of the future, where industry leaders shape the next wave of AI innovation.
"
|
3,057 | 2,023 |
"G7 introduces voluntary AI code of conduct | VentureBeat"
|
"https://venturebeat.com/ai/to-promote-safe-secure-trustworthy-ai-g7-introduces-voluntary-code-of-conduct"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages G7 introduces voluntary AI code of conduct Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Global government leaders are continuing to make it clear that they are taking AI’s risks and opportunities seriously.
Today, in the most recent government action around the evolving technology, the Group of Seven (G7) industrialized nations announced the International Code of Conduct for Organizations Developing Advanced AI Systems. The voluntary guidance, which builds on the “Hiroshima AI Process” announced in May, aims to promote safe, secure and trustworthy AI.
The announcement comes on the same day that U.S. President Joe Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence.” It also comes as the E.U. is finalizing its legally binding EU AI Act and follows the U.N. Secretary-General’s recent creation of a new Artificial Intelligence Advisory Board.
Composed of more than three dozen global government, technology and academic leaders, the body will support the international community’s efforts to govern the evolving technology.
“We… stress the innovative opportunities and transformative potential of advanced AI systems, in particular, foundation models and generative AI,” the G7 said in a statement issued today.
“We also recognize the need to manage risks and to protect individuals, society and our shared principles, including the rule of law and democratic values, keeping humankind at the center.” Leaders assert that meeting such challenges requires “shaping an inclusive governance” for AI.
An extensive 11-point framework The G7 — consisting of the U.S., E.U., Britain, Canada, France, Germany, Italy and Japan — released the new 11-point framework to help guide developers in responsible AI creation and deployment.
The group of global leaders called on organizations to commit to the code of conduct, while acknowledging that “different jurisdictions may take their own unique approaches to implementing these guiding principles.” The 11 points include: – Take appropriate measures throughout development to identify, evaluate and mitigate risks. This can include red-teaming and testing and mitigation to ensure trustworthiness, safety and security. Developers should enable traceability with datasets, processes and decisions.
– Identify and mitigate vulnerabilities and incidents and patterns of misuse after deployment. This can include monitoring for vulnerabilities, incidents and emerging risks and facilitating third-party and user discovery and incident reporting.
– Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use. This should include transparency reporting that is supported by “robust documentation processes.”
– Work towards responsible information-sharing and reporting of incidents. This can include evaluation reports, information on security and safety risks, intended or unintended capabilities and attempts to circumvent safeguards.
– Develop, implement and disclose AI governance and risk management policies. This applies to personal data, prompts and outputs.
– Invest in and implement security controls including physical security, cybersecurity and insider threat safeguards. This may include securing model weights and algorithms, servers and datasets, including operational security measures and cyber/physical access controls.
– Develop and deploy reliable content authentication and provenance mechanisms such as watermarking. Provenance data should include an identifier of the service or model that created the content, and disclaimers should also inform users that they are interacting with an AI system. (A rough illustration of such a provenance record appears after the final point below.)
– Prioritize research to mitigate societal, safety and security risks. This can include conducting, collaborating on and investing in research and developing mitigation tools.
– Prioritize the development of AI systems to address “the world’s greatest challenges,” including the climate crisis, global health and education. Organizations should also support digital literacy initiatives.
– Advance the development and adoption of international technical standards. This includes contributing to the development and use of international technical standards and best practices.
– Implement appropriate data input measures and protections for personal data and intellectual property. This should include appropriate transparency of training datasets.
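The code of conduct does not prescribe a specific mechanism for content authentication, so the following is only a rough, standard-library sketch of what a signed provenance record for AI-generated content could look like. The field names and the HMAC-based signature are assumptions for illustration, not a G7 or industry standard.

```python
# Illustrative only: a minimal signed provenance record for a piece of AI-generated content.
# Field names and the HMAC signature scheme are assumptions, not a defined standard.

import hashlib, hmac, json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-real-secret"   # hypothetical key held by the generating service

def provenance_record(content: bytes, model_id: str) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_by": model_id,                          # identifier of the service or model
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

print(json.dumps(provenance_record(b"example generated text", "example-llm-v1"), indent=2))
```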
A ‘non-exhaustive’ living document The G7 emphasized that AI organizations must respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and “humancentricity.” Advanced systems should not be introduced in a way that is harmful, undermines democratic values, facilitates terrorism, enables criminal misuse, “or poses substantial risks to safety, security and human rights.” The group also committed to introducing monitoring tools and mechanisms to hold organizations accountable.
To ensure that it remains “fit for purpose and responsive,” the code of conduct will be updated as necessary based on input from government, academia and the private sector. The list of “non-exhaustive” principles will be “discussed and elaborated as a living document.” The G7 leaders further assert that their efforts are intended to foster an environment where AI benefits are maximized while mitigating its “risks for the common good worldwide.” This should include developing and emerging economies “with a view of closing digital divides and achieving digital inclusion.” Support from fellow global leaders The code of conduct received approval from other global government officials, including Věra Jourová, the European Commission’s Vice President for Values and Transparency. “Trustworthy, ethical, safe and secure, this is the generative artificial intelligence we want and need,” Jourová said in a statement. With the Code of Conduct, “the EU and our like-minded partners can lead the way in making sure AI brings benefits while addressing its risks.” European Commission President Ursula von der Leyen, for her part, said that “the potential benefits of artificial intelligence for citizens and the economy are huge. However, the acceleration in the capacity of AI also brings new challenges. I call on AI developers to sign and implement this Code of Conduct as soon as possible.”
"
|
3,058 | 2,023 |
"This week in data: Generative AI spending and top questions the best CEOs ask | VentureBeat"
|
"https://venturebeat.com/ai/this-week-in-data-generative-ai-spending-and-top-questions-the-best-ceos-ask"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Generative AI spending and top questions the best CEOs ask Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
IDC predicts that generative AI investments will grow nearly tenfold over the next four years; a new study shows little correlation between where you studied or worked and your ability to start a unicorn; McKinsey breaks down the questions that the best CEOs know to ask and solve for.
These are some of the topics we’re going to cover in this week’s CarCast.
1) Gen AI predictions: A few weeks ago, I asked how big your 2024 gen AI budget should be in relation to your traditional AI budget. Close to 30% of you haven’t been able to answer that question. Well, IDC’s Ritu Jyoti and Rick Villars just published research that might help you. (A quick back-of-the-envelope look at what that growth implies appears after the third item below.)
2) The questions great CEOs ask: McKinsey produces great research on what makes a CEO great. What I’ve learned working side by side with many of them is that the difference between the best and the rest is the quality of their questions.
3) Where do unicorn founders come from? It turns out that what unicorn founders have in common might actually surprise you, and the fact that they studied at an elite university or worked at an elite company is in fact not a leading factor.
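As a quick sanity check on what "nearly tenfold over the next four years" implies, the arithmetic below converts that multiple into a compound annual growth rate. The tenfold figure is the only input taken from the IDC prediction cited above; the rest is just math.

```python
# Back-of-the-envelope: what compound annual growth rate does ~10x over 4 years imply?
growth_multiple = 10      # "nearly tenfold", per the IDC prediction cited above
years = 4
cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 78% per year
```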
This week’s CarCast also includes extras, such as a new study on what’s required to be a public company. Enjoy! Bruno Aziza is a technology entrepreneur and partner at CapitalG, Alphabet’s independent growth fund.
"
|
3,059 | 2,023 |
"The ‘World Cup' of AI policy: will USA win? | the AI Beat | VentureBeat"
|
"https://venturebeat.com/ai/the-world-cup-of-ai-policy-will-usa-win-the-ai-beat"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The ‘World Cup’ of AI policy: will USA win? | the AI Beat Share on Facebook Share on X Share on LinkedIn Image by DALL-E 3 for VentureBeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
After the White House released its long-awaited, 100+ page Executive Order on “Safe, Secure, and Trustworthy” AI yesterday — and my inbox practically exploded with offers of post-game analysis — I asked Merve Hickok, president of the independent nonprofit Center for AI and Digital Policy, about its timing. After all, the AI Executive Order came just two days before Vice President Kamala Harris crosses the pond to attend the highly-anticipated UK AI Safety Summit ; the same day the G7 introduced a voluntary AI code of conduct; and just as the final negotiations around the EU AI Act are in “touching distance” of the finish line.
Hickok laughed and said that she and her colleagues have begun to call this week the “World Cup” of AI policy. “There are multiple big-ticket happenings this week, so if you’re involved in those ‘World Cup’ conversations, it’s keeping you busy.” AI policy is no game, of course — but there is no doubt that there is stiff competition to show global leadership on AI regulation. The question is, will the USA win? And can it attract the talent equivalent of soccer/football superstars Cristiano Ronaldo or Lionel Messi to do it? According to Hickok, the answer is yes — and, she added, “we should.” The US, she explained, needs “models based on human rights and democratic values.” That has always been the expectation from the US, she added, but pointed out that it has been lacking — something she said she called out in her congressional testimony.
“Therefore, it’s great to see us now making human rights and democratic values a guiding north star,” she said about Biden’s AI Executive Order. “That’s what we need in global AI governance. This is not a national thing. AI does not sit within borders.” US positioning itself as global AI policy leader While not everyone might agree with Hickok’s point of view on US priorities for AI regulation, others told me that there is no doubt the US is clearly positioning itself to lead in global AI policy.
“Today’s Executive Order signals the Biden Administration’s determination to translate policy into practice now and to play a leadership role in AI governance globally,” said Caitlin Fennessy, vice president and chief knowledge officer of the International Association of Privacy Professionals ( IAPP ).
Florian Douetteau, co-founder and CEO of AI unicorn Dataiku, said in an email that while the EU leans towards stricter AI regulation, the US is striking a balance between innovation and responsible usage. “This approach not only ensures the safe and ethical development of AI, but also positions the US as a leader in the global AI arena, fostering innovation while safeguarding public interests,” he said.
And Aya Ibrahim, former senior advisor to the director of the White House Office of Science and Technology Policy, pointed out that “global leadership on AI begins with the US getting its own house in order first, and this executive order is a major step in that direction.” Other countries ‘gaining on us’ On the AI policy front, Team USA is also notably concerned with maintaining its technology innovation lead as other countries, particularly China, gain ground. (For example, the Biden administration announced two weeks ago that it is reducing the types of semiconductors that American companies will be able to sell to China.)
Even Senate Majority Leader Chuck Schumer (D-NY) is feeling the need for World Cup-level speed. President Biden’s AI Executive Order, he said in a statement yesterday, is a “crucial step” to ensure that the US remains the leader of AI innovation and “can harness this awesome technology for good.” “This is a massive step forward, but of course more is needed. All executive orders are limited in what they can do, so it is now on Congress to augment, expand, and cement this massive start with legislation,” he said. “Congress must now act with urgency and humility. Urgency, because we can’t wait while other countries are gaining on us and humility because the task of ensuring sustained investment to advance AI innovation and setting common-sense guardrails is a powerful and challenging one.” ‘Shooting ourselves in the foot’ Not surprisingly, however, some policy-watchers have a completely different take on the ‘World Cup’ of AI policy: that the AI Executive Order will make US global AI leadership goals even harder to achieve.
Adam Thierer, a senior fellow for the R Street Institute’s technology and innovation team, said that the EO — and the Biden Administration’s negotiations with the UK at the AI Safety Summit — is “shooting ourselves in the foot as the race gets underway,” “While some will appreciate the whole-of-government approach to AI required by the order, if taken too far, unilateral and heavy-handed administrative meddling in AI markets could undermine America’s global competitiveness and even the nation’s geopolitical security,” Thierer wrote in a blog post. “Excessive preemptive regulation of AI systems could impede the growth of these technologies or limit their potential in various ways.” To win the World Cup of AI policy, be a goldfish The truth is, winning the World Cup of AI policy won’t be an easy feat. Just look at the AI Executive Order: The New York Times quoted Sarah Kreps, a professor at the Tech Policy Institute at Cornell University, as saying that any of the directives in the order will be difficult to carry out, including rapid hiring of AI experts in government and passing legislation.
“It’s calling for a lot of action that’s not likely to receive a response,” Ms. Kreps said.
But while mistakes may be made along the way, it will be important not to linger on them but to move on to the next AI policy challenge. As Ted Lasso would say, “be a goldfish” — and if AI leaders need some feedback on policy as they vie for the World Cup of AI Policy, there’s always an emergency meeting of the Diamond Dogs.
"
|
3,060 | 2,023 |
"Nvidia's NeMo taps generative AI in designing semiconductor chips | VentureBeat"
|
"https://venturebeat.com/ai/nvidias-nemo-taps-generative-ai-in-designing-semiconductor-chips"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia’s NeMo taps generative AI in designing semiconductor chips Share on Facebook Share on X Share on LinkedIn Nvidia's NeMo project Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
In a research paper released today, Nvidia semiconductor engineers showcased how generative artificial intelligence (AI) can assist in the complex process of designing semiconductors.
The study demonstrated how specialized industries can leverage large language models (LLMs) trained on internal data to create assistants that enhance productivity.
The research, utilizing Nvidia NeMo, highlights the potential for customized AI models to provide a competitive edge in the semiconductor field.
Semiconductor design is a highly challenging endeavor, involving the meticulous construction of chips containing billions of transistors on 3D circuitry maps that are like city streets — but thinner than a human hair.
It requires the coordination of multiple engineering teams over a span of years. Each team specializes in different aspects of chip design, employing specific methods, software programs, and computer languages.
Mark Ren, an Nvidia Research director, was the lead author of the paper.
“I believe over time large language models will help all the processes, across the board,” Ren said in a statement.
The paper was announced by Bill Dally, Nvidia’s chief scientist, during a keynote at the International Conference on Computer-Aided Design held in San Francisco.
“This effort marks an important first step in applying LLMs to the complex work of designing semiconductors,” said Dally, in a statement. “It shows how even highly specialized fields can use their internal data to train useful generative AI models.” The research team at Nvidia developed a custom LLM called ChipNeMo, trained on the company’s internal data, to generate and optimize software and assist human designers. The long-term goal is to apply generative AI to every stage of chip design, leading to substantial gains in overall productivity. The initial use cases explored by the team include a chatbot, a code generator, and an analysis tool.
The most well-received use case thus far is an analysis tool that automates the time-consuming task of maintaining updated bug descriptions. A prototype chatbot that helps engineers find technical documents quickly and a code generator that creates snippets of specialized software for chip designs are also under development.
The research paper focuses on the team’s efforts to gather design data and create a specialized generative AI model. This process can be applied to any industry. The team started with a foundation model and used Nvidia NeMo, a framework for building, customizing, and deploying generative AI models, to refine the model. The final ChipNeMo model, with 43 billion parameters and trained on over a trillion tokens, demonstrated its capability to understand patterns.
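The paper itself relies on Nvidia's NeMo framework, whose exact training setup is not reproduced here. As a rough, generic illustration of the same idea (continuing to train a pretrained causal language model on a corpus of internal domain text), the sketch below uses the Hugging Face transformers API instead; the model checkpoint and dataset file names are placeholders, not anything from the ChipNeMo work.

```python
# Generic sketch of domain-adaptive pretraining (NOT the NeMo API used in the paper).
# "base-causal-lm" and "internal_chip_docs.txt" are placeholders for a real checkpoint and corpus.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "base-causal-lm"                      # placeholder foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token      # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Internal design documents, scripts and bug reports collected as plain text.
raw = load_dataset("text", data_files={"train": "internal_chip_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)   # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chip-adapted-lm", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=5e-5),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()   # the adapted checkpoint can then be tuned further for chatbot or codegen use
```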
The study serves as an example of how a deeply technical team can refine a pretrained model with its own data. It highlights the importance of customizing LLMs, as even models with fewer parameters can match or exceed the performance of larger general-purpose LLMs. Careful data collection and cleaning are crucial during the training process, and users are advised to stay updated on the latest tools that can simplify and expedite their work.
The semiconductor industry is just beginning to explore the possibilities of generative AI, and this research provides valuable insights. Enterprises interested in building their own custom LLMs can utilize the NeMo framework, which is available on GitHub and the Nvidia NGC catalog, Nvidia said.
The paper has a lot of names on it: Mingjie Liu, Teo Ene, Robert Kirby, Chris Cheng, Nathaniel Pinckney, Rongjian Liang, Jonah Alben, Himyanshu Anand, Sanmitra Banerjee, Ismet Bayraktaroglu, Bonita Bhaskaran, Bryan Catanzaro, Arjun Chaudhuri, Sharon Clay, Bill Dally, Laura Dang, Parikshit Deshpande, Siddhanth Dhodhi, Sameer Halepete, Eric Hill, Jiashang Hu, Sumit Jain, Brucek Khailany, Kishor Kunal, Xiaowei Li, Hao Liu, Stuart Oberman, Sujeet Omar, Sreedhar Pratty, Ambar Sarkar, Zhengjiang Shao, Hanfei Sun, Pratik P Suthar, Varun Tej, Kaizhe Xu and Haoxing Ren.
"
|
3,061 | 2,023 |
"MIT’s copilot system can set the stage for a new wave of AI innovation | VentureBeat"
|
"https://venturebeat.com/ai/mit-copilot-system-can-set-the-stage-for-a-new-wave-of-ai-innovation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT’s copilot system can set the stage for a new wave of AI innovation Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
MIT scientists have developed a deep learning system, Air-Guardian , designed to work in tandem with airplane pilots to enhance flight safety. This artificial intelligence (AI) copilot can detect when a human pilot overlooks a critical situation and intervene to prevent potential incidents.
The backbone of Air-Guardian is a novel deep learning system known as Liquid Neural Networks (LNN), developed by the MIT Computer Science and Artificial Intelligence Lab (CSAIL). LNNs have already demonstrated their effectiveness in various fields. Their potential impact is significant, particularly in areas that require compute-efficient and explainable AI systems, where they might be a viable alternative to current popular deep learning models.
Tracking attention Air-Guardian employs a unique method to enhance flight safety. It monitors both the human pilot’s attention and the AI’s focus, identifying instances where the two do not align. If the human pilot overlooks a critical aspect, the AI system steps in and takes control of that particular flight element.
This human-in-the-loop system is designed to maintain the pilot’s control while allowing the AI to fill in gaps. “The idea is to design systems that can collaborate with humans. In cases when humans face challenges in order to take control of something, the AI can help. And for things that humans are good at, the humans can keep doing it,” said Ramin Hasani, AI scientist at MIT CSAIL and co-author of the Air-Guardian paper.
For instance, when an airplane is flying close to the ground, the gravitational force can be unpredictable, potentially causing the pilot to lose consciousness. In such scenarios, Air-Guardian can take over to prevent incidents. In other situations, the human pilot might be overwhelmed with excessive information displayed on the screens. Here, the AI can sift through the data, highlighting critical information that the pilot might have missed.
Air-Guardian uses eye-tracking technology to monitor human attention, while heatmaps are used to indicate where the AI system’s attention is directed. When a divergence between the two is detected, Air-Guardian evaluates whether the AI has identified an issue that requires immediate attention.
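How might such a divergence check be wired up? Below is a minimal sketch in Python, assuming the pilot’s gaze and the model’s attention both arrive as normalized heatmaps over the same display grid; the function names and the intervention threshold are illustrative and are not taken from the Air-Guardian paper.

```python
import numpy as np

def attention_divergence(pilot_map: np.ndarray, model_map: np.ndarray) -> float:
    """Jensen-Shannon divergence between two normalized attention heatmaps."""
    eps = 1e-12
    p = pilot_map.flatten() / (pilot_map.sum() + eps)
    q = model_map.flatten() / (model_map.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def guardian_step(pilot_map: np.ndarray, model_map: np.ndarray, threshold: float = 0.3) -> str:
    """Hand the overlooked flight element to the AI copilot when attention diverges too far."""
    if attention_divergence(pilot_map, model_map) > threshold:
        return "copilot_intervenes"
    return "pilot_in_control"
```

In the real system the decision is of course far more involved, but the core signal is exactly this kind of mismatch between where the human is looking and where the model believes attention belongs.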
AI for safety-critical systems Air-Guardian, like many other control systems, is built upon a deep reinforcement learning model. This model involves an AI agent, powered by a neural network, that takes actions based on environmental observations. The agent is rewarded for each correct action, enabling the neural network to gradually learn a policy that guides it to make the right decisions in given situations.
What sets Air-Guardian apart, however, is the LNN at its core. LNNs are known for their explainability, a feature that allows engineers to delve into the model’s decision-making process. This stands in stark contrast to traditional deep learning systems, often referred to as “black boxes” due to their inscrutable nature.
“For safety-critical applications, you can’t use normal black boxes because you need to understand the system before you can use it. You want to have a degree of explainability for your system,” Hasani said.
Hasani was part of a team that began research on LNNs in 2020. In 2022, their work on an efficient drone control system, based on LNNs, was featured on the cover of Science Robotics.
Now, they are taking strides to bring this technology into practical applications.
Another significant attribute of LNNs is their ability to learn causal relationships within their data. Traditional neural networks often learn incorrect or superficial correlations in their data, leading to unexpected errors when deployed in real-world settings. LNNs, on the other hand, can interact with their data to test counterfactual scenarios and learn cause-and-effect relationships, making them more robust in real-world settings.
“If you want to learn the true objective of the task, you cannot just learn the statistical features from the vision input that you’re getting. You have to learn cause and effect,” Hasani said.
AI for the edge Liquid Neural Networks offer another significant advantage: their compactness. Unlike traditional deep learning networks, LNNs can learn complex tasks using far fewer computational units or “neurons.” This compactness allows them to operate on computers with limited processing power and memory.
“Today, in AI systems, we see that as we scale them up, they become more and more powerful and can do like many more tasks. But one of the problems is that you cannot deploy them on an edge device,” Hasani said.
In a previous study, the MIT CSAIL team demonstrated that an LNN with just 19 neurons could learn a task that would typically require 100,000 neurons in a classic deep neural network. This compactness is particularly crucial for edge computing applications, such as self-driving cars, drones, robots and aviation. In these scenarios, the AI system must make real-time decisions and cannot rely on cloud-based models.
“The compactness of liquid neural networks is definitely helpful because you don’t have an infinite amount of compute on these cars or airplanes and edge devices,” Hasani said.
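To make the compactness claim concrete, the sketch below simply counts trainable parameters in a 19-unit recurrent controller versus a wide feedforward network with 100,000 hidden units. Neither model is the actual liquid neural network architecture used by CSAIL; the input and output sizes are made up, and the point is only the order-of-magnitude gap.

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

obs_dim, act_dim = 32, 4  # hypothetical sensor inputs and control outputs

class TinyController(nn.Module):
    """A 19-unit recurrent controller, standing in for a compact liquid network."""
    def __init__(self):
        super().__init__()
        self.cell = nn.RNNCell(obs_dim, 19)
        self.head = nn.Linear(19, act_dim)
    def forward(self, x, h):
        h = self.cell(x, h)
        return self.head(h), h

wide_mlp = nn.Sequential(nn.Linear(obs_dim, 100_000), nn.Tanh(), nn.Linear(100_000, act_dim))

print("19-unit recurrent controller:", count_params(TinyController()))  # roughly 1,100 parameters
print("100,000-unit feedforward net:", count_params(wide_mlp))          # roughly 3.7 million parameters
```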
Broader applications of Air-Guardian and LNNs Hasani believes that the insights gained from the development of Air-Guardian can be applied to a multitude of scenarios where AI assistants must collaborate with humans. These could be simple scenarios, such as accomplishing tasks across several applications, or complex ones like automated surgery and autonomous driving, where human and AI interaction is constant.
“You can generalize these applications across many disciplines,” Hasani said.
LNNs could also contribute to the burgeoning trend of autonomous agents, a field that has seen significant growth with the rise of large language models. LNNs could power AI agents such as virtual CEOs, capable of making and explaining decisions to their human counterparts, aligning their values and agendas with those of humans.
“Liquid neural networks are universal signal processing systems. It doesn’t matter what kind of input data you’re serving, whether it’s video, audio, text, financial time series, medical time series, user behavior,” Hasani said. “Anything that has some notion of sequentiality can go inside the liquid neural network and the universal signal processing system can create different models. The applications can range from predictive modeling to time series to autonomy to generative AI applications.” Hasani likens the current state of LNNs to the year 2016, just before the influential “transformer” paper was published. Transformers, built on years of prior research, eventually became the backbone of large language models like ChatGPT. Today, we are at the dawn of what can be achieved with LNNs, which could potentially bring powerful AI systems to edge devices such as smartphones and personal computers.
“This is a new foundation model,” Hasani asserts. “A new wave of AI systems can be built on top of it.”
"
|
3,062 | 2,023 |
"Midjourney, Stability AI and DeviantArt score in copyright case | VentureBeat"
|
"https://venturebeat.com/ai/midjourney-stability-ai-and-deviantart-win-a-victory-in-copyright-case-by-artists-but-the-fight-continues"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Midjourney, Stability AI and DeviantArt win a victory in copyright case by artists — but the fight continues Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The contentious issue of whether AI art generators violate copyright — since they are by and large trained on human artists’ work, in many cases without their direct affirmative consent, compensation, or even knowledge — has taken a step toward being settled in the U.S. today.
U.S. District Court Judge William H. Orrick, of the Northern District of California, today filed a decision in a copyright infringement class action lawsuit brought against Stability AI (creator of the popular open-source Stable Diffusion text-to-image AI generator), Midjourney (another AI image generator based on Stable Diffusion) and popular image sharing service and social network DeviantArt (which released its own AI image generator based on Stable Diffusion, “DreamUp” back in late 2022). The lawsuit was filed by three artists —Sarah Anderson, Kelly McKernan, and Karla Ortiz.
Full disclosure: VentureBeat regularly uses Midjourney, Stable Diffusion, and other AI art image generators to create article header art and other art for our digital presence.
Motion to dismiss ‘largely granted’ The three AI image generator companies had filed a motion to dismiss the copyright infringement case against them by the artists, and today Judge Orrick largely granted it, writing “the Complaint is defective in numerous respects.” Orrick spends the rest of his ruling explaining why he found the artists’ complaint defective for various reasons, the biggest being that two of the artists — McKernan and Ortiz — did not actually register copyrights for their art with the U.S. Copyright Office.
Also, Anderson copyrighted only 16 of the hundreds of works cited in the artists’ complaint. The artists had asserted that some of their images were included in the Large-scale Artificial Intelligence Open Network ( LAION ) open-source database of billions of images created by computer scientist/machine learning (ML) researcher Christoph Schuhmann and collaborators, which all three AI art generator programs used to train.
Roar like a LAION The size of the LAION database may help protect the AI companies, as Orrick writes: “The other problem for plaintiffs is that it is simply not plausible that every Training Image used to train Stable Diffusion was copyrighted (as opposed to copyrightable), or that all DeviantArt users’ Output Images rely upon (theoretically) copyrighted Training Images, and therefore all Output images are derivative images.
Even if that clarity is provided and even if plaintiffs narrow their allegations to limit them to Output Images that draw upon Training Images based upon copyrighted images, I am not convinced that copyright claims based on a derivative theory can survive absent ‘substantial similarity’ type allegations. The cases plaintiffs rely on appear to recognize that the alleged infringer’s derivative work must still bear some similarity to the original work or contain the protected elements of the original work.” In other words — because AI image generators reference art by many different artists when generating new imagery, unless it is possible to prove that the resulting image referenced solely or primarily copyrighted art, and is substantially similar to that original copyrighted work, it is likely not infringing on the original work.
The fight continues… Yet, Orrick does invite the artists to amend their claims and refile a narrower lawsuit citing specifically infringed copyrighted images.
The judge also allowed one count — for direct copyright infringement against Stability AI for copying Anderson’s 16 copyrighted works without authorization — to move forward. The full ruling document is available via Aaron Moss.
"
|
3,063 | 2,023 |
"How can AI better understand humans? Simple: ask us questions | VentureBeat"
|
"https://venturebeat.com/ai/how-can-ai-better-understand-humans-simple-by-asking-us-questions"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How can AI better understand humans? Simple: by asking us questions Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Anyone who has dealt in a customer-facing job — or even just worked with a team of more than a few individuals — knows that every person on Earth has their own unique, sometimes baffling, preferences.
Understanding the preferences of every individual is difficult even for us fellow humans. But what about for AI models, which have no direct human experience upon which to draw, let alone use as a frame-of-reference to apply to others when trying to understand what they want? A team of researchers from leading institutions and the startup Anthropic , the company behind the large language model (LLM)/chatbot Claude 2 , is working on this very problem and has come up with a seemingly obvious solution: get AI models to ask more questions of users to find out what they really want.
Entering a new world of AI understanding through GATE Anthropic researcher Alex Tamkin, together with colleagues Belinda Z. Li and Jacob Andreas of the Massachusetts Institute of Technology’s (MIT’s) Computer Science and Artificial Intelligence Laboratory (CSAIL), along with Noah Goodman of Stanford, published a research paper earlier this month on their method, which they call “generative active task elicitation (GATE).” Their goal? “Use [large language] models themselves to help convert human preferences into automated decision-making systems.” In other words: take an LLM’s existing capability to analyze and generate text and use it to ask written questions of the user on their first interaction with the LLM. The LLM will then read and incorporate the user’s answers into its generations going forward, live on the fly, and (this is important) infer from those answers — based on what other words and concepts they are related to in the LLM’s database — what the user is ultimately asking for.
As the researchers write: “The effectiveness of language models (LMs) for understanding and producing free-form text suggests that they may be capable of eliciting and understanding user preferences.” The three GATES The method can be applied in various ways, according to the researchers: Generative active learning: The researchers describe this method as the LLM producing examples of the kind of responses it can deliver and asking how the user likes them. One example question they provide for an LLM to ask is: “Are you interested in the following article? The Art of Fusion Cuisine: Mixing Cultures and Flavors […] .” Based on what the user responds, the LLM will deliver more or less content along those lines.
Yes/no question generation: This method is as simple as it sounds (and gets). The LLM will ask binary yes or no questions such as: “Do you enjoy reading articles about health and wellness?” and then take into account the user’s answers when responding going forward, avoiding information that it associates with those questions that received a “no” answer.
Open-ended questions: Similar to the first method, but even broader. As the researchers write, the LLM will seek to obtain “the broadest and most abstract pieces of knowledge” from the user, including questions such as “What hobbies or activities do you enjoy in your free time […], and why do these hobbies or activities captivate you?” Promising results The researchers tried out the GATE method in three domains — content recommendation, moral reasoning and email validation.
By fine-tuning GPT-4 from Anthropic rival OpenAI and recruiting 388 paid participants at $12 per hour to answer questions from GPT-4 and grade its responses, the researchers discovered GATE often yields more accurate models than baselines while requiring comparable or less mental effort from users.
Specifically, they discovered that the GPT-4 fine-tuned with GATE did a better job at guessing each user’s individual preferences in its responses by about 0.05 points of significance when subjectively measured, which sounds like a small amount but is actually a lot when starting from zero, as the researchers’ scale does.
Ultimately, the researchers state that they “presented initial evidence that LMs can successfully implement GATE to elicit human preferences (sometimes) more accurately and with less effort than supervised learning, active learning, or prompting-based approaches.” This could save enterprise software developers a lot of time when booting up LLM-powered chatbots for customer or employee-facing applications. Instead of training them on a corpus of data and trying to use that to ascertain individual customer preferences, fine-tuning their preferred models to perform the Q/A dance specified above could make it easier for them to craft engaging, positive, and helpful experiences for their intended users.
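As a rough illustration of that Q/A dance, here is what a GATE-style elicitation loop could look like when built on a generic chat-completion function. The llm() callable, the prompt wording and the number of turns are placeholders of ours, not the prompts or setup used in the paper.

```python
def elicit_preferences(llm, task: str, turns: int = 3) -> str:
    """GATE-style loop: the model asks questions, then distills the answers into a preference profile."""
    transcript = []
    for _ in range(turns):
        question = llm(
            f"You are helping with: {task}.\n"
            f"Conversation so far: {transcript}\n"
            "Ask the user one short question (open-ended or yes/no) that would most "
            "reduce your uncertainty about their preferences."
        )
        answer = input(f"{question}\n> ")  # in a product this would come from the app's UI
        transcript.append({"question": question, "answer": answer})
    # The resulting profile can be prepended to later requests so generations reflect the user's tastes.
    return llm(f"Summarize this user's preferences for {task}: {transcript}")

# Example usage: profile = elicit_preferences(my_chat_model, "recommending articles to read")
```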
So, if your favorite AI chatbot of choice begins asking you questions about your preferences in the near future, there’s a good chance it may be using the GATE method to try and give you better responses going forward.
"
|
3,064 | 2,023 |
"Forrester's 2024 Predictions Report warns of AI 'shadow pandemic' as employees adopt unauthorized tools | VentureBeat"
|
"https://venturebeat.com/ai/forresters-2024-predictions-report-warns-of-ai-shadow-pandemic-as-employees-adopt-unauthorized-tools"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Forrester’s 2024 Predictions Report warns of AI ‘shadow pandemic’ as employees adopt unauthorized tools Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Forrester Research unveiled its highly anticipated 2024 predictions report last week, charting a course for more measured AI growth while warning business leaders to prepare for rampant “shadow usage” as employees rely on their own AI tools to be productive.
The 38-page report sees AI platform budgets tripling in 2024 as companies invest in scalable solutions to build, deploy and monitor AI models. However, Forrester cautions this won’t be enough to satisfy employee demand. The report predicts 60% of employees will use their own AI tools at work, introducing new regulatory and compliance challenges.
Forrester sees 85% of companies expanding AI capabilities with open-source models like GPT-J and BERT rather than relying solely on popular proprietary choices like ChatGPT. It also expects 40% of enterprises to proactively invest in governance for AI compliance, getting ahead of looming regulations in the E.U., U.S. and China.
Open-source models and risk management On the innovation front, Forrester predicts a major insurer will begin offering AI “hallucination insurance” in 2024, covering errors and harms specifically caused by AI mistakes as the technology proliferates.
Forrester’s 2024 Predictions Report overall caps a wild year that saw generative AI explode in popularity among consumers and employees, putting pressure on CIOs and CDOs to deliver business results.
The report strikes a pragmatic tone, warning leaders not to get distracted by AI hype and “fun and games” as they build strategies to capitalize on AI’s emerging potential.
Forrester sees 2024 as the start of the “era of intentional AI” as companies move the technology out of R&D and into productive business applications.
“Without a doubt, 2023 will go down as the year of AI as we saw the rise of consumerization of generative AI. Announcements rolled out about mergers, acquisitions, wild company valuations, and oversized VC and internal investments,” Forrester’s team of analysts say in the report.
“We predict that 2024 will galvanize enterprise teams to be proactive, develop a meaningful AI strategy, and deliver on the AI promise while keeping an eye on regulations and new risks,” they add.
Here are some of the biggest takeaways from the Forrester 2024 AI predictions report: 60% of employees will use their own AI tools at work, creating security risks.
AI platform budgets will triple as demand for AI capabilities spikes.
85% of companies will incorporate open-source AI models into their tech stack.
40% of enterprises will proactively invest in AI governance for compliance.
Insurers will begin offering “AI hallucination insurance” covering AI mistakes.
Generative AI will see 36% CAGR growth from 2023 to 2030 as adoption surges.
Enterprises will move AI from R&D into production business applications.
AI strategies must focus on managing shadow usage and driving value.
Tech leaders feel pressure from employees and executives to adopt AI.
2024 will start the “era of intentional AI” as hype gives way to pragmatism.
These predictions underscore the need for a focused AI strategy even as generative AI brings renewed excitement to the business world. Enterprise leaders must proactively address risks as they expand responsible AI adoption.
The Forrester report is a significant indicator of just how far we’ve come in the AI and data analytics landscape, and how much further we’re set to go. As we move into 2024, the predictions made in this report provide a roadmap for businesses, technologists, and policymakers to navigate the future of AI, data analytics, and automation.
The report serves as a reminder that while the AI revolution is promising, it is not without challenges. As we transition from the hype to the pragmatism of AI, it is important to approach these challenges strategically, ensuring that the promise of AI is realized while mitigating potential risks.
"
|
3,065 | 2,023 |
"Dell and Meta partner to bring Llama 2 open source AI to enterprise | VentureBeat"
|
"https://venturebeat.com/ai/dell-and-meta-partner-to-bring-llama-2-open-source-ai-to-enterprise-users-on-premises"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dell and Meta partner to bring Llama 2 open source AI to enterprise users on-premises Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The open-source Llama 2 large language model (LLM) developed by Meta is getting a major enterprise adoption boost, thanks to Dell Technologies.
Dell today announced that it is adding support for Llama 2 models to its lineup of Dell Validated Design for Generative AI hardware , as well as its generative AI solutions for on-premises deployments.
Bringing Llama 2 to the enterprise Llama 2 was originally released by Meta in July and the models have been supported by multiple cloud providers including Microsoft Azure, Amazon Web Services and Google Cloud.
The Dell partnership is different in that it is bringing the open-source LLM to on-premises deployments.
Not only is Dell now supporting Llama 2 for its enterprise users, but it’s also using Llama 2 for its own internal use cases.
For Meta, the Dell partnership provides more opportunities to learn how enterprises are using Llama, which will help to further expand the capabilities of an entire stack of Llama functionality over time.
For Matt Baker, senior vice-president, AI strategy at Dell, adding support for Llama 2 will help his company to achieve its vision of bringing AI to enterprise data.
“The vast majority of data lives on premises and we now have this open access model to bring on-premises to your data,” Baker told VentureBeat. “With the level of sophistication that the Llama 2 family has all the way up to 70 billion parameters, you can now run that on-premises right next to your data and really build some fantastic applications.” Dell isn’t just supporting Llama 2, it’s using it too Dell was already providing support for the Nvidia NeMo framework to help organizations build out generative AI applications.
The addition of Llama 2 provides another option for organizations to choose from. Dell will be guiding its enterprise customers on the hardware needed to deploy Llama 2 as well as helping organizations on how to build applications that benefit from the open-source LLM.
Going a step further, Baker also noted that Dell is using Llama 2 for its own internal purposes. He added that Dell is using Llama 2 both for experimental as well as actual production deployment. One of the primary use cases today is to help support Retrieval Augmented Generation (RAG) as part of Dell’s own knowledge base of articles. Llama 2 helps to provide a chatbot-style interface to more easily get to that information for Dell.
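For readers unfamiliar with the pattern, here is a minimal sketch of what that kind of Retrieval Augmented Generation flow involves. It is a generic illustration, not Dell’s internal system; embed() and generate() stand in for whatever embedding model and on-premises Llama 2 endpoint an organization actually deploys.

```python
import numpy as np

def rag_answer(query: str, articles: list[str], embed, generate, top_k: int = 3) -> str:
    """Retrieve the most relevant knowledge-base articles, then ground the model's answer in them."""
    doc_vecs = np.array([embed(a) for a in articles])
    q_vec = np.array(embed(query))
    # Cosine similarity between the query and every article
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    context = "\n\n".join(articles[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = (
        "Answer the question using only the support articles below.\n\n"
        f"Articles:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)  # e.g., a call to a locally hosted Llama 2 70B model
```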
Dell will make money from its hardware and professional services for generative AI, but Baker noted that Dell is not directly monetizing Llama 2 itself, which is freely available as open-source technology.
“We’re not monetizing Llama 2 in any way, frankly, it’s just what we believe is a really great capability that’s available to our customers and we want to simplify how our customers consume it,” Baker said.
Why Meta is optimistic about Dell support for Llama 2 Overall Llama 2 has been a stellar success with approximately 30 million downloads of the open-source technology in the last 30 days, according to Joe Spisak, head of generative AI open source at Meta.
For Meta, Llama 2 isn’t just an LLM, it’s the centerpiece for an entire generative AI stack that also includes the open-source PyTorch machine learning framework that Meta created and continues to help develop.
“We basically see here that we are really the center of the developer ecosystem for generative AI,” Spisak told VentureBeat.
Spisak commented that the adoption of Llama 2 is coming from a variety of players in the AI ecosystem. He noted that cloud providers like Google Cloud, Amazon Web Services (AWS), and Microsoft Azure are using Llama as a platform for the optimization of LLM benchmarks. Hardware vendors are also key partners according to Spisak with Meta working with companies like Qualcomm, bringing Llama to new devices.
While Llama sees adoption in the cloud, Spisak emphasized the importance of partnerships that can run Llama on-premises, with Meta’s partnership with Dell as a prime example. With an open LLM, Spisak said that an organization has options when it comes to deployment, which is important when it comes to consideration about data privacy.
“Obviously, you can use public cloud of course, but the real value here is being able to run it in these environments where traditionally you don’t want to send data back to the cloud, or you want to run things very kind of locally, depending on the sensitivity of the private data,” Spisak said. “That’s where these open models really shine, and Llama 2 does hit that sweet spot as a really capable model and it can really run anywhere you want it to run.” Working with Dell will also help the Llama development community to better understand and build out for enterprise requirements. Spisak said that the more Llama technology is deployed, the more use cases there are, the better it will be for Llama developers to learn where the pitfalls are, and how to better deploy at scale.
“That’s really the value of working with folks like Dell, it really helps us as a platform and that will hopefully, help us build a better Llama 3 and Llama 4 and overall just a safer and more open ecosystem,” Spisak said.
"
|
3,066 | 2,023 |
"Canva launches AI tools for education | VentureBeat"
|
"https://venturebeat.com/ai/canva-launches-ai-tools-for-education"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Canva launches AI tools for education Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Canva is not resting on its AI laurels. Less than a month after launching its new AI-powered Magic Studio , the decade-old Australian startup that’s won a massive userbase by offering cloud-based graphic design and digital multimedia tools for the non-art degree-holding masses is ramping up its efforts to court the oft-neglected education technology (edtech or edutech) sector with AI.
Last week, Canva announced its new “Classroom Magic,” a version of its Magic Studio designed specifically for teachers and students.
Nested under the existing Canva for Education product launched back in 2019 and already used by 50 million students and teachers around the globe , according to the company, the new Classroom Magic brings some of the same AI features from Magic Studio over to schools.
“When you think about the AI tools we are launching, they are all there to help teachers save time, create more engaging content for students and help the students embrace creativity,” said Jason Wilmot, Canva’s Head of Education, in an exclusive videoconference interview with VentureBeat.
Canva for Education, along with the new Classroom Magic features, is available free for all students, teachers, and districts at the K-12 level. The company offers a paid plan for universities but did not provide specifics on pricing to VentureBeat.
Canva counts more than 600,000 different schools among its existing users of Canva for Education, which means the new Classroom Magic update could be the biggest introduction of AI into the classroom in the world to date.
New AI-powered multimedia design features for teachers and students Among the AI tools is “Magic Write,” a generative AI tool that allows students and teachers to access quick actions from a dropdown menu, including summarizing text, expanding short text into longer formats, rewriting text, changing the tone and more.
For those parents and educators worried about AI discouraging students from learning how to write on their own, Canva advises in a press release that Magic Write allows students to “develop their comprehension skills through intentionally crafted prompts.” Another feature, “Magic Animate,” allows students and teachers to turn static text into moving text and automatically add transitions to presentations.
“Magic Grab,” meanwhile, automatically detects separate elements and objects within an image — say text within a classroom handout or diagrams — and lets the user/teacher/student automatically move, resize, and reposition them, even if they were not separate elements to begin with.
In a time-saving GenAI feature that is sure to please busy instructors and overworked students alike, “Magic Switch” is also ported over from the main Canva Magic Studio, which lets users “transform” their projects across formats, turning documents into presentations and vice versa with one click of a button. That’s not to say the resulting transformed file will be perfect — but it may be close, and it will at least provide a huge lift at getting started.
For students and teachers concerned about accessibility — and really, everyone should be — Canva’s new Classroom Magic provides automated “alt text” suggestions (that’s the descriptive text attached to images that screen readers announce, helping people with visual impairments understand what elements are displayed).
Shielding students and schools from harmful content with AI Of course, part of the reason the edtech market is often overlooked or unserviced by software vendors is that school districts and schools themselves tend to have very strict, sometimes idiosyncratic requirements about what kinds of content and software/services are allowed in the classroom, even down to the level of discrete/individual product features and experiences.
Many school districts and schools maintain carefully curated “whitelists” of approved software and domains that are accessible by people within the institution, while others are blacklisted and blocked by the school’s network administrator and firewalls.
With GenAI going mainstream starting last year with the November 2022 release of ChatGPT, educators and district officials have been understandably nervous about how the new technology is making its way into the classroom.
Just today, the British newspaper The Guardian published an article online describing the experience of several teachers in engaging with AI in their classrooms, including one who said she “has been rethinking every single assignment she gives her students.” Canva understands the apprehension around AI being used in the classroom, and as such, is bringing about strict controls through its new Canva Shield program, available alongside its Magic Studio for general users of the graphic design platform.
“This is the first place that a lot of these students will interact with AI, so we have to make sure that we’re investing heavily into trust and safety, and making sure that these products are safe for the classroom,” said Wilmot.
As such, Canva conducts daily discussions with teachers and school districts and uses these interactions to shape Canva Shield for Education. The platform includes: “ Advanced Educator Controls : School administrators can set access permissions to these AI tools based on what they’re comfortable with.
Automatic Reviews : We use advanced technology to automatically review input prompts to prevent the creation of any inappropriate content.
Blocked Terms : As an additional precaution, we’ve blocked more than 10,000 words from being used in AI prompts to ensure content is safe for the classroom.
Reporting Options : We provide the ability to report and block any potentially unwanted terms or content.” The big question: do schools even want all this AI tech? Yet even as Canva marches forward with the new Classroom Magic suite of AI tools, the key question remains: how useful will teachers and students find them? Does anybody even want all this AI? Fortunately for the company and its aspirations of revolutionizing edtech, it already has a great head start with its massive educational user base. Market research supports the idea that students, teachers and district officials alike all see potential in leveraging AI in the classroom to help improve learning.
Canva recently conducted a survey of 1,000 U.S. teachers to gain insights into how educators are utilizing and perceiving AI in the classroom.
The survey revealed that while most teachers are excited about AI and eager to incorporate it into their teaching (78% of respondents), there is a significant knowledge gap that prevents wider adoption (93% admitted they don’t know where to start with AI tools).
Some of the benefits teachers cited include AI’s ability to boost student productivity (60%) and creativity (59%), reduce administrative burdens (56%), and support personalized learning (67%).
The report underscored the critical role technology now plays in modern classrooms, with 92% of teachers using apps and services regularly.
Teachers said they are interested in using AI to simplify language (67%), visualize data (66%), generate art (63%), edit writing and assignments and lesson plans (63%) and summarize information (62%).
Wilmot was careful to state, though, that he did not see Canva’s role as “teaching AI engineering,” but rather “showcasing what AI can do” for learning.
“Our goal is just to make sure that we have a safe environment where students and teachers can learn about some of the AI capabilities within Canva and make sure that they’re demonstrating their learning,” he said.
Wilmot also noted that Canva already offers more than 5,000 lesson plans that teachers can use freely within the platform and customize to suit their needs, now more easily thanks to the Classroom Magic AI.
"
|
3,067 | 2,023 |
"Biden AI exec order rolls out to applause, concerns of overreach | VentureBeat"
|
"https://venturebeat.com/ai/biden-ai-exec-order-rolls-out-to-applause-concerns-of-overreach"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Biden AI exec order rolls out to applause, concerns of overreach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In what could go down as a historic moment in technology policy, the Biden-Harris Administration issued a sweeping Executive Order this morning on “Safe, Secure, and Trustworthy Artificial Intelligence.” The 100+ page Executive Order covers a wide variety of issues, including AI safety, bioweapons risk, national security, cybersecurity, privacy, bias, civil rights, algorithmic discrimination, criminal justice, education, workers’ rights, and research.
A few notable actions within the AI Executive Order:
Developers of powerful foundation models will be required to share safety test results and other critical information with the US government.
The National Institute of Standards and Technology will set “the rigorous standards for extensive red-team testing to ensure safety before public release.”
The Department of Commerce will “develop guidance for content authentication and watermarking to clearly label AI-generated content.”
The US government will produce a report on AI’s potential labor-market impacts.
“Existing authorities” will modernize and streamline visa access, expanding “the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States.”
This week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
The government will issue guidance for agencies’ use of AI — “including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.”
Hundreds of AI experts weighed in on the exec order Not surprisingly, the reaction was quick and overwhelming, with seemingly every expert across the AI landscape, from both the public and private sector, weighing in.
Merve Hickok, president of the independent nonprofit Center for AI and Digital Policy, told VentureBeat in a phone interview this morning that she wasn’t expecting such a broad Executive Order, “but we’re happy to see it” — that is, she explained, that the US is “promoting democratic values and advanced leadership on AI governance.” The EO underlines the fact that the Biden administration “understands the real and immediate challenges of AI and they still call on bipartisan legislation from the Congress,” she explained. “So the EO is one part of the administration’s multi-pronged approach, but it is definitely in the right direction.” Hickok particularly applauded the new rules on AI procurement for federal agencies, with clear standards to protect rights and safety. “We’ve been demanding that guidance from OMB [Office of Management and Budget] for multiple years now,” she said.
Many offered positive takes on how the EO tackles AI risks Many others posted their positive takes on various aspects of the AI Executive Order. For example, Jack Clark, co-founder of Anthropic, posted on X that “seeing such a heavy emphasis on testing and evaluating AI systems seems good – you can’t manage what you can’t measure.” Meanwhile, author and critic Gary Marcus, who testified about AI regulation before the Senate, along with OpenAI CEO Sam Altman in May, wrote in his Substack newsletter that “there’s a lot to like in the Executive Order. It’s fantastic that the US government is taking the many risks of AI seriously.” But, he added, “how effective this actually becomes depends a lot on the exact wording, and how things are enforced, and how much of it is binding as opposed to merely voluntary.” Others voiced concerns about AI regulatory overreach Others weighed in with their concerns about the broad nature of the Executive Order. Adam Thierer, a senior fellow at the R Street Institute, wrote an analysis provocatively titled “White House Executive Order Threatens to Put AI in a Regulatory Cage.” While some “will appreciate the whole-of-government approach to AI required by the order, if taken too far, unilateral and heavy-handed administrative meddling in AI markets could undermine America’s global competitiveness and even the nation’s geopolitical security,” he wrote, adding that “the new EO highlights how the administration is adopting an everything-and-the-kitchen-sink approach to AI policy that is, at once, extremely ambitious and potentially over-zealous.” President Biden gave staff directive to ‘move with urgency’ Still, it’s clear that the Executive Order reflects the Biden Administration’s effort to take a forward-leaning stance on shaping the evolution of AI.
According to the Associated Press, AI “has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.” The AP reported that White House chief of staff Jeff Zients recalled Biden “giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.” “We can’t move at a normal government pace,” Zients said the Democratic president told him. “We have to move as fast, if not faster than the technology itself.”
"
|
3,068 | 2,023 |
"AMD's Q3 revenues hit $5.8B, up 4% as PC CPUs grow again | VentureBeat"
|
"https://venturebeat.com/ai/amd-earnings-2"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AMD’s Q3 revenues hit $5.8B, up 4% as PC CPUs grow again Share on Facebook Share on X Share on LinkedIn AMD Ryzen Threadripper Pro 7000WX uses 64 Zen 4 cores.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Advanced Micro Devices reported that revenue in its third quarter was $5.8 billion, up 4% from a year earlier and 8% sequentially, as new products, AI chips and data center tech grew in the quarter.
AMD, which makes the processors and graphics processing units (GPUs) used in AI machines, is riding high demand for AI. AMD said PC client revenue recovered in the quarter, hitting $1.5 billion in revenues, up 42% from a year ago. Gaming revenues, however, were down 8% in the quarter.
“We delivered strong revenue and earnings growth driven by demand for our Ryzen 7000 series PC processors and record server processor sales,” said AMD CEO Lisa Su, in a statement. “Our data center business is on a significant growth trajectory based on the strength of our EPYC CPU portfolio and the ramp of Instinct MI300 accelerator shipments to support multiple deployments with hyperscale, enterprise and AI customers.” In after-hours trading, AMD’s stock is down 4.4% to $94.11 a share. AMD’s value is $159 billion in overall market capitalization, which is actually higher than rival Intel at $153 billion. In a call with analysts, Su said AMD gained market share in data center chips in the quarter. She also thinks the company is positioned to gain more share as the PC market returns to normal buying patterns.
Analysts expected Q3 earnings per share to come in at 64 cents on revenue of $5.37 billion. GAAP net income for Q3 came in at $299 million, or 18 cents a share, up 353% from a year ago. On a non-GAAP basis, net income for Q3 was $1.13 billion, or 70 cents a share, up 4% from a year ago.
“We executed well in the third quarter, delivering year-over-year growth in revenue, gross margin and earnings per share,” said AMD CFO Jean Hu, in a statement. “In the fourth quarter, we expect to see strong growth in Data Center and continued momentum in Client, partially offset by lower sales in the Gaming segment and additional softening of demand in the embedded markets.” For the fourth quarter of 2023, AMD expects revenue to be approximately $6.1 billion, plus or minus $300 million. At the mid-point of the revenue range, this represents year-over-year growth of approximately 9% and sequential growth of approximately 5%. Non-GAAP gross margin is expected to be approximately 51.5%.
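As a quick sanity check, the guidance percentages line up with the article’s own figures; in the short calculation below, the year-ago quarter is backed out from the stated growth rate rather than quoted directly.

```python
q3_2023 = 5.8        # billions, this quarter's reported revenue
q4_2023_mid = 6.1    # billions, midpoint of Q4 guidance

print(f"sequential growth: {q4_2023_mid / q3_2023 - 1:.1%}")   # ~5.2%, i.e. "approximately 5%"
print(f"implied year-ago Q4: ${q4_2023_mid / 1.09:.2f}B")      # ~$5.6B, consistent with ~9% YoY growth
```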
AMD reported $1.5 billion in sales for the client group, up 42% from a year ago, driven primarily by higher Ryzen mobile processor sales. Revenue grew 46% sequentially driven by an increase in AMD Ryzen 7000 Series CPU sales. The group returned to profitability.
For the data center, AMD sales were flat at $1.6 billion compared to last year. That was due to growth in 4th Gen AMD Epyc CPU sales, offset by a decline in adaptive system-on-chip (SoC) data center products.
Revenue increased 21% sequentially as customer adoption of 4th Gen AMD Epyc CPUs accelerated during the quarter. And AMD Instinct MI300A and MI300X GPUs are on track for volume production in the fourth quarter to support deployments with several leading HPC, cloud and AI customers.
Embedded segment revenue was $1.2 billion, down 5% year-over-year primarily due to a decrease in revenue in the communications market. Revenue decreased 15% sequentially due to inventory correction at customers in several end markets.
Gaming segment revenue was $1.5 billion, down 8% year-over-year, primarily due to a decline in semi-custom revenue, partially offset by an increase in AMD Radeon GPU sales. Revenue declined 5% sequentially due to lower semi-custom sales. That could mean that game console sales slowed a bit, as we are in the fourth year of the console cycle.
Last week, AMD’s rival Intel reported that its third-quarter revenue was $14.2 billion, down 8% from a year earlier. Third-quarter earnings per share (EPS) were 7 cents a share, while non-GAAP EPS was 41 cents a share.
Cloud adoption of AMD Epyc processors continues to grow significantly, with nearly 100 new instances from Microsoft Azure, AWS, Oracle and others available for preview and general access, including new AWS instances powered by 4th Gen AMD Epyc CPUs that deliver leadership performance and energy efficiency. Data center GPUs saw new traction for the next generation of products, Su said.
During the quarter, AMD expanded the 4th Gen Epyc CPU portfolio with the launch of the AMD Epyc 8004 Series processors, purpose built to deliver exceptional energy efficiency and performance for cloud services, intelligent edge and telco.
AMD also made significant progress powering pervasive AI across the cloud, edge and end point devices: AMD completed the acquisition of open-source AI software expert Nod.ai, expanding the company’s open AI software capabilities. Nod.ai has developed an industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs.
AMD announced the AMD Ryzen Threadripper PRO 7000 WX-Series and Ryzen Threadripper 7000 processors, delivering outstanding performance for the most demanding desktop platforms. Ryzen Threadripper PRO 7000 WX-Series processors will be available later this year to DIY customers, SI partners and through OEM partners including Dell Technologies, HP and Lenovo.
AMD has $16 billion in current assets and $7 billion in liabilities. That’s a pretty good financial condition compared to many years ago.
AMD announced plans to invest approximately $400 million over the next five years to expand research, development and engineering operations in India, including the addition of approximately 3,000 new engineering roles by the end of 2028.
There are more than 50 laptop designs powered by Ryzen AI chips.
"
|
3,069 | 2,023 |
"AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate | VentureBeat"
|
"https://venturebeat.com/ai/ai-pioneers-hinton-ng-lecun-bengio-amp-up-x-risk-debate"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In a series of online articles, blog posts and posts on X/LinkedIn over the past few days, AI pioneers (sometimes called “ godfathers ” of AI) Geoffrey Hinton, Andrew Ng, Yann LeCun and Yoshua Bengio have amped up their debate over existential risks of AI by commenting publicly on each other’s posts. The debate clearly places Hinton and Bengio on the side that is highly concerned about AI’s existential risks , or x-risks, while Ng and LeCun believe the concerns are overblown, or even a conspiracy theory Big Tech firms are using to consolidate power.
It’s a far cry from the united front of AI positivity they have shown over the years since leading the way on the deep learning ‘revolution’ that began in 2012. Even a year ago, LeCun and Hinton pushed back in interviews with VentureBeat against Gary Marcus and other critics who said deep learning had “hit a wall.”
Hinton responded to claims that x-risk is Big Tech conspiracy
But today, Hinton, who quit his role at Google in May to speak out freely about the risks of AI, posted on X about recent comments from computer scientist Andrew Ng, who did pioneering work in image recognition after co-founding Google Brain in 2011.
Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.
Hinton was responding to Ng’s comments in a recent interview with the Australian Financial Review that Big Tech is “lying” about some AI risks to shut down competition and trigger strict regulation.
And today, in an issue of his newsletter The Batch, Ng wrote that “My greatest fear for the future of AI is if overhyped risks (such as human extinction) lets tech lobbyists get enacted stifling regulations that suppress open-source and crush innovation.”
LeCun and Ng say tech leaders are exaggerating existential risks
LeCun, who is chief AI scientist at Meta, responded to Ng’s comments with a recent post saying: “Well, at least *one* Big Tech company is open sourcing AI models and not lying about AI existential risk.” He was referring, of course, to his own company, Meta. He added: “Lying is a big word that I haven’t used. I think some of these tech leaders are genuinely worried about existential risk. I think they are wrong. I think they exaggerate it. I think they have an unwarranted superiority complex that leads them to believe that 1. It’s okay if *they* do it, but not okay if the populace does it. 2. Superhuman AI is just around the corner and will have all the characteristics of current LLMs.” LeCun also responded to Hinton’s post: You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models.
This will inevitably lead to bad outcomes in the medium term.
While Hinton responded to one of LeCun’s posts: Let's open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK.
Bengio says AI risks are ‘keeping me up at night’
Meanwhile, just last week Hinton and Bengio — who, together with LeCun, received the 2018 ACM A.M. Turing Award (often referred to as the “Nobel Prize of Computing”) for their work on deep learning — joined with 22 other leading AI academics and experts to propose a framework for policy and governance that aims to address the growing risks associated with artificial intelligence.
The paper said companies and governments should devote one-third of their AI research and development budgets to AI safety, and also stressed urgency in pursuing specific research breakthroughs to bolster AI safety efforts.
Just a few days ago, Bengio wrote an opinion piece for Canada’s Globe and Mail, in which he said that as ChatGPT and similar LLMs continued to make giant leaps over the past year, his “apprehension steadily grew.” He said that major AI risks “are a grave source of concern for me, keeping me up at night, especially when I think about my grandson and the legacy we will leave to his generation.”
X-risk debate does not diminish friendship, say ‘godfathers’ of AI
The debate does not diminish the long friendship between the quartet. Andrew Ng posted a photo of himself at a recent party celebrating Hinton’s retirement from Google, while LeCun did the same — posting a photo of himself with Hinton and Bengio with a caption saying: “A reminder that people can disagree about important things but still be good friends.”
"
|
3,070 | 2,023 |
"Why a DevOps approach is crucial to securing containers and Kubernetes | VentureBeat"
|
"https://venturebeat.com/security/why-a-devops-approach-is-crucial-to-securing-containers-and-kubernetes"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight Why a DevOps approach is crucial to securing containers and Kubernetes Share on Facebook Share on X Share on LinkedIn Presented by Orca Security Modern architectures like containers and Kubernetes offer both huge benefits and unique challenges. In this VB Spotlight, learn how to keep your applications secure throughout the dev cycle, the tools and platforms essential for stronger security and compliance and more! Register to watch free on-demand! By 2027, 90% of global organizations will be running containerized applications in production — an Olympic-sized jump from less than 40% in 2021. Containers are far more lightweight than virtual machines (VMs), letting developers virtualize at the operating system (OS) level, while orchestrators like Kubernetes run containers at scale. Software can be developed and deployed faster and more efficiently, at greater scale, but they also offer brand new challenges in every step of the development cycle. Best practices for building and running secure containers are emerging, from secure base images to patching vulnerabilities to secrets management and more.
In this VB Spotlight event, industry experts Neil Carpenter, principal technical evangelist at Orca Security and Jason Patterson, senior partner solutions architect with Amazon Web Services discuss why security and development have to go hand-in-hand in a containers-and-Kubernetes world, how to make that dream come true with the ideal DevSecOps journey, and more.
The security challenges of containers and Kubernetes
Containers are processes running on a Linux machine, contained through the kernel (far different from the traditional VM, Patterson explains, which runs within the operating system of the host machine).
“They started to use this technology within containers to segment out processes within the OS to protect them,” he explains. “As they started to develop this technology, they implemented other kernel controls, such as cgroups, and then they also implemented namespaces. It’s a way of locking down a process and restricting it within the operating system.” But security challenges in containers and Kubernetes are very similar to old-school VM security issues, Carpenter says. If there’s a remote code execution vulnerability in Tomcat, it doesn’t matter if it’s running on VMs in the data center or on AWS — an attacker can execute code, maintain persistence and more.
“What is foundationally different is how I find that vulnerability,” he explains. “We have to go through this whole continuous integration, continuous delivery, CICD process where we build the image, test the image, ship the image and deploy the containers based on it.” That means the same problems require different approaches, different solutions, even different constituencies involved. But traditional vulnerability management tools and security tools don’t work well with containers, making it much harder to manage vulnerabilities in production.
Every container is a copy of an underlying image, and if there are one hundred running containers with a remote code execution vulnerability, security can’t simply go patch all those containers. It requires IT to step in and fix the underlying image, then retest and reship it, as well as redeploy all the containers based on top of it.
Why security needs to embrace a DevOps approach
DevOps, which is heavily focused on automation, has significantly accelerated development and delivery processes, making the production cycle lightning fast, leaving traditional security methods lagging behind, Carpenter says.
“From a security perspective, the only way we get ahead of that is if we become part of that process,” he says. “Instead of checking everything at the point it’s deployed or after deployment, applying our policies, looking for problems, we embed that into the delivery pipeline and start checking security policy in an automated fashion at the time somebody writes source code, or the time they build a container image or ship that container image, in the same way developers today are very used to, in their pipelines.” It’s “shift left security,” or taking security policies and automating them in the pipeline to unearth problems before they get to production. That speeds up security testing and lets security teams keep pace with fast-moving DevOps teams.
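As a concrete illustration of the shift-left idea, here is a minimal Python sketch of an automated policy check that could run as an early pipeline stage. The rules and file names are illustrative only, not Orca's or AWS's actual tooling.

```python
import sys
from pathlib import Path

def lint_dockerfile(path: str) -> list[str]:
    """Toy shift-left check: flag a few risky Dockerfile patterns before the image is built."""
    lines = [l.strip() for l in Path(path).read_text().splitlines() if l.strip()]
    findings = []
    user_lines = [l for l in lines if l.upper().startswith("USER ")]
    if not user_lines or user_lines[-1].split()[1] in ("root", "0"):
        findings.append("container runs as root (add a non-root USER instruction)")
    if any(l.upper().startswith("FROM ") and (":" not in l or l.endswith(":latest")) for l in lines):
        findings.append("base image is unpinned or uses :latest (pin a specific tag or digest)")
    if any(l.upper().startswith("ADD ") for l in lines):
        findings.append("ADD used where COPY is usually safer")
    return findings

if __name__ == "__main__":
    problems = lint_dockerfile(sys.argv[1] if len(sys.argv) > 1 else "Dockerfile")
    for p in problems:
        print(f"POLICY FAIL: {p}")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the pipeline stage
```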
“The more things we can fix early, the less we have to worry about in production and the more we can find new, emerging issues, more important issues, and we can deal with higher order problems inside the security team,” he says.
It’s not a linear process, he adds, because it’s a matter of continuously refining and fixing.
Automating security in CICD pipelines
You can build in security from the very start, Patterson says, ensuring that the file configuration is secure. That includes not running as root, which can give attackers access to root on the running machine, and ensuring there are no world writable files, because even with restricted privileges, an attacker could still execute privilege escalation.
The base image is the foundation to build upon for source code, additional apps or changes to the operating system, to ensure the application will execute within the Kubernetes environment.
“That base image is key to making sure that you’re deploying the least amount of data, least amount of services and libraries that you need to execute your application,” he explains. “You want to use a base image that is designed for containers, that is stripped down, and has just the bare minimum in it. Then you want to make sure, that you’re looking for SUID programs or other world writable programs and stuff like that.” You can use custom checks to make sure that the container image doesn’t have vulnerable libraries, or that the application’s source code is not vulnerable.
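To make that concrete, here is a minimal Python sketch of the kind of filesystem check Patterson describes: it walks an unpacked image root and flags world-writable files and SUID/SGID binaries. It is an illustrative example, not Orca's or AWS's scanner, and the paths are placeholders.

```python
import os
import stat
import sys

def scan_image_root(root: str) -> list[tuple[str, str]]:
    """Flag world-writable files and setuid/setgid binaries under an unpacked image rootfs."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # broken symlink, permission issue, etc.
            if not stat.S_ISREG(mode):
                continue
            if mode & stat.S_IWOTH:
                findings.append((path, "world-writable"))
            if mode & (stat.S_ISUID | stat.S_ISGID):
                findings.append((path, "setuid/setgid"))
    return findings

if __name__ == "__main__":
    for path, issue in scan_image_root(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"{issue}: {path}")
```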
“As you go through the code commits to your code repository, your code build is going to pull down and develop that image or compile that image and push it to an ECR container registry,” he says. “That’s when, typically, in the Amazon world, you’ll start doing the scanning, looking for vulnerabilities, and detecting issues with the container. When you use tools like Orca, you can get involved a little sooner in that process and take additional steps in that process to help secure your containers.” For a granular look at container and Kubernetes security, from overcoming common challenges like misconfigurations and secrets management to best practices for building a secure environment, establishing collaboration between IT and security, and more, don’t miss this VB Spotlight.
Watch free on-demand!
Agenda:
Security measures for every stage of the application development lifecycle
Best practices for building and running secure containers — from secure base images to patching vulnerabilities to secrets management
IaC scanning to detect misconfigurations in Dockerfiles and Kubernetes deployment YAMLs
What an ideal DevSecOps journey should look like
The tools and platforms that support stronger security and compliance
Presenters:
Neil Carpenter, Principal Technical Evangelist, Orca Security
Jason Patterson, Sr. Partner Solutions Architect, Amazon Web Services
Louis Columbus, Moderator, VentureBeat
"
|
3,071 | 2,023 |
"E-commerce fraud to cost $48 billion globally this year as attacks skyrocket, report says | VentureBeat"
|
"https://venturebeat.com/security/e-commerce-fraud-to-cost-48-billion-globally-this-year-as-attacks-skyrocket-report-says"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages E-commerce fraud to cost $48 billion globally this year as attacks skyrocket, report says Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Cyberattacks cost more than ransoms. They also damage brands and customer trust if company leaders aren’t fully committed to protecting customer data and fighting against e-commerce fraud.
Telesign’s latest Trust Index shows why CIOs, CISOs, and their teams must first see e-commerce fraud prevention as a core business challenge and consider how AI-based techniques can help. Customer trust is on the line.
Sift’s Q3 2023 Digital Trust & Safety Index amplifies Telesign’s Trust Index findings, identifying a 36% increase in online payment fraud in early 2023 driven partly by an epidemic of Account Takeover (ATO) attacks. Sift’s Index found that ATO attacks jumped 354% year-over-year in Q2 2023 across Sift’s global network after reaching a 169% increase year-over-year in 2022.
Fraud attackers using AI mine trust gaps for cash
The more successful a fraud attack is, the more it damages a brand. Left unchecked, e-commerce fraud will decimate a brand, its goodwill, and its trust, driving customers away to competitors. It’s on CIOs and CISOs to get e-commerce fraud detection and response right. Telesign found that 94% of customers hold businesses accountable and believe they must be responsible for protecting their digital privacy.
Sift found that cybercriminals and fraudsters rely on AI and cutting-edge automation techniques that democratize access, resulting in new fraud-as-a-service offers. One of the most visible and highly subscribed is FraudGPT.
Fraud schemes are becoming so pervasive that 24% of those surveyed report having seen offers to participate in account takeover schemes online.
Telesign’s Trust Index found that 44% of data breach victims tell friends and family not to associate with a brand that’s been breached. 43% quit associating with the brand, and 30% of data breach victims share the incident on social media, further amplifying the event.
Sift’s Index found that 73% of consumers believe the brand is accountable for ATO attacks and responsible for protecting account credentials. Only 43% of account takeover victims were notified by the company that their information had been compromised.
Online fraud attacks target a new generation of victims
The 2023 Telesign Trust Index reveals the damage fraudsters do to brands while stealing from their most loyal customers. What makes Telesign’s Index noteworthy is its finding of how fraudsters target younger consumers for digital fraud.
The Index found that the greater a person’s exposure to the Internet, the greater their risk of fraud. 18- to 34-year-olds spent the most time online of all age groups, with 75% spending three or more hours online daily. SEON’s Gen-Z Fraud Report found that individuals younger than 20 were subjected to a staggering 116% increase in fraud incidents between 2019 and 2020, resulting in collective losses of approximately $70.98 million in 2020 or $3,000 per person.
They’re closely followed by 35- to 54-year-olds, with 70% of this group spending three or more hours online daily. Fraud disproportionately affects millennials (age 25-44), who are 4x more likely to be victims than seniors (65+). 56% of millennial victims experienced account hacking. This debunks the stereotype that older people are most vulnerable to fraud.
Avoiding the high cost of losing consumer trust
E-commerce losses attributable to online payment fraud were estimated at $41 billion globally in 2022, growing to $48 billion this year. The cumulative merchant losses to online payment fraud globally between 2023 and 2027 will exceed $343 billion.
Losing consumer trust by being careless with customer data has a cascading effect. Not only do brands lose customers for life, many also pay settlements to compensate consumers for damages. One of the best-known examples is the $190 million settlement Capital One paid to 98 million customers after consumer data was stolen in a breach.
“Organizations that cultivate trust will build unbreakable bonds with customers, attract the most dedicated talent, and create new business models with partners — all while minimizing risk,” writes Enza Iannopollo, Principal Analyst, Forrester, in her blog post, Predictions 2023: Organizations That Maintain Trust Will Thrive.
Certifying trust is a must-have in e-commerce.
Telesign is taking a unique approach to helping its customers reduce and potentially eliminate the high cost of losing customer trust by providing a Trust Certified Badge that reassures consumers that the online business they’re buying from is legitimate. E-commerce businesses need to prove that they are protecting customers’ digital identities, safeguarding their digital ecosystems from fraud, proactively preventing and detecting digital crime on their systems, and responding to fraud threats when they arise.
Kristi Melani, Telesign CMO and Head of GTM Strategy, says, “In today’s digital economy, trust is a valuable currency for online business transactions. Telesign believes in creating a digital world built on Continuous Trust. The time is now to prioritize trust, and our Trust Certified Badge is an important step forward in deepening consumers’ confidence in the digital platforms they engage with. The Trust Certified Badge indicates to consumers that they are entering a space that protects their personal information and puts their safety first.”
How AI can help grow customer trust
Online fraud attacks take many forms, from promotion abuse and fake accounts to account takeovers (ATO). These many forms of e-commerce fraud are an ideal use case for AI and machine learning (ML).
Every provider takes a unique approach to the challenge.
Telesign uses ML-based algorithms to perform real-time phone number risk-scoring that identifies anomalous, potentially malicious activity in real time and immediately delivers a reason code that can help reduce the incidence of attacks in the future. Leading vendors using AI and ML to protect against e-commerce fraud include Ekata , Kount , Sift , Signifyd , Riskified , and others.
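To illustrate the general shape of risk scoring with reason codes, the toy rule-based Python sketch below combines a few phone-number signals into a score and returns the reasons that drove it. It is not Telesign's or any vendor's actual model; the signal names and weights are made up for the example.

```python
def phone_risk(signals: dict) -> tuple[int, list[str]]:
    """Toy phone-number risk scorer: returns a 0-100 score plus reason codes."""
    score, reasons = 0, []
    if signals.get("line_type") == "voip":
        score += 35
        reasons.append("VOIP_NUMBER")
    if signals.get("days_since_first_seen", 10_000) < 7:
        score += 25
        reasons.append("RECENTLY_OBSERVED_NUMBER")
    if signals.get("recent_sim_swap", False):
        score += 30
        reasons.append("RECENT_SIM_SWAP")
    if signals.get("signups_last_24h", 0) > 10:
        score += 20
        reasons.append("HIGH_SIGNUP_VELOCITY")
    return min(score, 100), reasons

# Example: a freshly observed VoIP number used for many signups scores high.
score, reasons = phone_risk({"line_type": "voip", "days_since_first_seen": 2, "signups_last_24h": 25})
print(score, reasons)  # 80 ['VOIP_NUMBER', 'RECENTLY_OBSERVED_NUMBER', 'HIGH_SIGNUP_VELOCITY']
```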
E-commerce businesses need to consider how they can use AI and ML-based apps, tools, and techniques to protect themselves and their customers against fraud.
The following are a few attack strategies fraudsters use, with a brief overview of how AI can help shut them down.
Account Takeover (ATO) Attacks.
AI and ML are helping to shut these kinds of attacks down by analyzing behavioral patterns in real time and tracking transaction data to find anomalies. These attacks can leave consumers with tens of thousands of dollars in unauthorized charges.
18% of those surveyed have experienced account takeover attacks, with 62% of those taking place in the past year. Worse, 34% of victims were defrauded 2+ times, typically while using sites or apps for digital subscriptions, online shopping, and financial services.
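As a rough sketch of the behavioral-anomaly idea described above, and not any vendor's production system, an unsupervised model such as scikit-learn's IsolationForest can be fit on a user's normal login behavior and then used to flag sessions that deviate sharply from it. The features and values here are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Feature rows per login: [hour_of_day, failed_attempts, new_device (0/1), km_from_usual_location]
normal_logins = np.array([
    [9, 0, 0, 2], [10, 1, 0, 5], [20, 0, 0, 1], [8, 0, 0, 3], [21, 0, 0, 4],
    [9, 0, 0, 2], [19, 1, 0, 6], [10, 0, 0, 1], [22, 0, 0, 2], [8, 0, 0, 5],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(normal_logins)

# A 3 a.m. login from a new device, far from the usual location, after several failed attempts.
suspect = np.array([[3, 6, 1, 4200]])
print(model.predict(suspect))            # -1 means the session is flagged as anomalous
print(model.decision_function(suspect))  # lower scores indicate stronger anomalies
```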
Business Email Compromise (BEC) is part of a broader attack strategy.
VentureBeat has learned that several CEOs in the enterprise software industry have had deepfakes made of their voices; combined with an orchestrated BEC attack campaign, these deepfakes can lead to attackers stealing tens of thousands of dollars within minutes. AI and ML-based fraud detection and response systems, combined with human threat hunters, are part of managed detection and response (MDR) offerings that have successfully contained breaches that start with BEC.
Fake accounts and synthetic identities.
Fraudsters buy all available identity and personally identifiable information (PII), including social security numbers, birth dates, addresses, employment histories, and other information to create fake or synthetic identities. They then apply for new accounts that many existing fraud detection models perceive as legitimate, granting credit to the attackers. On pace to defraud financial and commerce systems by nearly $5 billion by 2024 , synthetic identity fraud is among the most difficult to identify and stop. Integrating user authentication, identity proofing, and adaptive authentication workflows to get the most value from machine learning insights is a start, and all fraud detection systems battling this problem also rely on risk scoring calculated in real time.
Promotions Abuse.
From attempting to duplicate coupons and digital sales codes to fraudulently filing promotions claims, this area is where AI and ML-based platforms continue to help e-commerce businesses avoid substantial losses. Telesign’s approach to triangulating phone number behavior, detecting multiple accounts from phone number attributes, and flagging potential promotion abuse using a telephone number is noteworthy.
Expect to see new AI-based attacks during the holidays.
Telesign’s Trust Index and Sift’s latest Index reflect how online fraud is becoming more lethal as attackers adapt AI to fine-tune their tradecraft. For any organization with an e-commerce channel, it’s on the CIO and CISOs to get e-commerce fraud detection and response right. Customer trust hangs in the balance, and so does the holiday season, by far the most lucrative of the year. Fraud attacks will spike going into the holidays, and now is the time for any e-commerce business to close the gap where fraud has happened in their businesses.
"
|
3,072 | 2,023 |
"Revefi secures $10.5M in seed funding, launches AI-powered enterprise data platform | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/revefi-secures-10-5m-in-seed-funding-launches-ai-powered-enterprise-data-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Revefi secures $10.5M in seed funding, launches AI-powered enterprise data platform Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Revefi , a Seattle-based AI startup aiming to be a “copilot” for enterprise data teams, has secured $10.5 million in seed funding in a round led by Mayfield managing partner Navin Chaddha. Other participants included GTMfund, Neythri Futures Fund, and more than 10 other strategic investors.
The company also announced today that it launched an AI-powered enterprise data platform, called the Data Operations Cloud.
The platform is designed to help data teams save time and money, and keep on track with their sprints, providing critical value without requiring a significant time investment from business leaders and IT professionals.
In an exclusive VentureBeat interview, Revefi co-founders Sanjay Agrawal and Shashank Gupta shared their vision for the future of cloud data management and how they plan to revolutionize the sector.
“We are about truly helping and being part of the data team’s journey,” said Agrawal. He emphasized the startup’s commitment to a “zero touch” approach, adding that, “It has to work right out of the box and bring the customer value instantly, end to end.” The product’s broad applicability is another key point of differentiation. “Our customers range from companies early in their journey to $10 billion public companies. Our target persona is a data engineering team or data infrastructure team leader responsible for getting data into a cloud data warehouse for business purposes,” Agrawal explained.
Co-founder Shashank Gupta echoed Agrawal’s sentiments, stressing the importance of their zero-touch approach and its importance for automation. “We really believe in having the right data, right time [commitment], and right costs,” said Gupta.
Revefi’s platform uses AI to complement human knowledge, reflecting the co-founders’ belief in the symbiotic relationship between technology and human expertise. Their go-to-market strategy is built for adoption and aims to be low friction, highly engaging, and extremely disruptive.
In a world increasingly driven by data, Revefi’s platform could be transformative. The company’s AI-powered technology is designed to optimize data management processes, with a focus on reducing costs and boosting efficiency.
Given the strong product-market fit and the recent influx of $10.5 million in seed funding, Revefi is well-positioned to make significant strides in the cloud data management sector. Their commitment to instant value and a zero-touch approach differentiates them in a crowded market, marking Revefi as a company to keep an eye on in the coming months and years.
"
|
3,073 | 2,023 |
"DataStax takes aim at event-driven AI with open source LangStream project | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/datastax-takes-aim-at-event-driven-ai-with-open-source-langstream-project"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DataStax takes aim at event-driven AI with open source LangStream project Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI more often than not works with static sources of data — but what if an organization wants to benefit from real time streaming data? That’s one of the goals underpinning the new LangStream open source project, led by DataStax.
The LangStream project was quietly soft launched by DataStax on Sept. 13 and the effort has iterated rapidly in the weeks since, with a new release out today that expands integration points to make the technology more useful. LangStream initially only worked with DataStax’s AstraDB database and now it supports a series of vector databases including Milvus as well as Pinecone.
The basic idea behind LangStream is to enable developers to more easily work with streaming data sources (sometimes referred to as data in motion), to help build what are known as event driven architectures. In an event driven architecture, an event, which could be a new data point coming in from a stream, is able to trigger or ‘drive’ another action. Event driven architectures are at the foundation of real time applications as well, enabling applications to benefit from data as it comes into a platform. This allows generative models to take the latest contextual data into account when formulating responses or completing tasks.
“LangStream is a way to build generative AI applications in an event driven way,” Chris Bartholomew, head of streaming engineering at DataStax told VentureBeat in an exclusive interview.
Bartholomew is no stranger to the world of streaming data, having previously been the founder and CEO of streaming data vendor Kesque, which was acquired by DataStax in 2021. Kesque developed technology based on the open source Apache Pulsar streaming data project, which has now become the foundation of the DataStax Astra Streaming service.
How LangStream works to enable event driven Generative AI
As it turns out, LangStream currently doesn’t rely on Apache Pulsar; rather, it makes use of the open source Apache Kafka technology, which is widely used today for event data streaming.
Bartholomew explained that LangStream uses a standard stream processing model where it takes in messages or events, processes them, and sends them out. LangStream is particularly useful in combination with vector database technologies in support of Retrieval Augmented Generation (RAG) operations where generative AI models are able to cite up-to-date data.
As data is pulled into a model for RAG, each new piece of data needs to have a vector embedding generated so that it can be used in a vector database. With the real time nature of streaming data, there is a need to have embeddings created in a synchronous data pipeline, which is what LangStream aims to enable. Bartholomew noted that LangStream is agnostic about which particular vector embedding model is used and can support multiple models today including open source models hosted on Hugging Face as well as Google’s Vertex AI.
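A rough Python sketch of that pattern looks like a consumer loop that embeds each incoming document chunk and upserts it for later retrieval. This is not LangStream's actual API; the topic name, message fields, embedding model and in-memory "vector store" are all placeholders.

```python
import json
from kafka import KafkaConsumer                         # pip install kafka-python
from sentence_transformers import SentenceTransformer   # any embedding model or service would work

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vector_store = {}  # stand-in for a real vector database (Astra DB, Milvus, Pinecone, ...)

consumer = KafkaConsumer(
    "documents",                                        # placeholder topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    doc = message.value                                 # e.g. {"id": "...", "text": "..."}
    embedding = embedder.encode(doc["text"])            # vector representation of the chunk
    vector_store[doc["id"]] = {"vector": embedding, "text": doc["text"]}
    # In a real pipeline this upsert would go to the vector database, keeping the
    # RAG index current as new events arrive.
```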
“A lot of what we’re doing is taking the pipeline streaming, event driven paradigm and we’re taking it to GenAI applications,” he said.
The future of LangStream
While it’s still early days for LangStream, the project is moving rapidly and there is lots of potential as the community of users grows.
“LangStream can greatly benefit developers working with generative AI as it helps them to easily build applications and simplifies the process of coordinating data from a variety of sources to enable high-quality prompts for LLMs,” Davor Bonaci, CTO and Executive Vice President of DataStax, told VentureBeat. “This makes it far simpler to build scalable, production-ready, real-world AI applications on a broad range of data types.” LangStream is being developed as an open source project, which is consistent with how DataStax has worked with other technologies it relies on for its commercial efforts including Apache Pulsar and the Apache Cassandra database.
“DataStax has a long history of working with open source communities,” Bonaci said. “It only seems fitting to contribute to yet another open source project, especially one that is so relevant to developers working with today’s most popular technologies.”
"
|
3,074 | 2,023 |
"Amazon Bedrock is now generally available as AWS enterprise GenAI efforts get serious | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/amazon-bedrock-is-now-generally-available-as-aws-enterprise-genai-efforts-get-serious"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon Bedrock is now generally available as AWS enterprise GenAI efforts get serious Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Making a foundation model for generative AI available is just the beginning; on its own, it is not a comprehensive solution to the complex demands of enterprise use cases.
Today in a major step forward, Amazon Web Services (AWS) announced the general availability of its Amazon Bedrock service, a vital tool in meeting the requirements of enterprise applications for generative AI.
AWS first introduced Amazon Bedrock in April as a preview service, offering a series of foundation models as a service on its cloud platform. The preview was subsequently expanded in July with the addition of more models, including Anthropic Claude 2 and Stability AI SDXL 1.0 models. Now readily available, Amazon Bedrock supports a range of models, including the company’s own, Amazon Titan Embeddings.
The transition of a service to general availability on AWS is not a decision made lightly; rather, it’s the culmination of rigorous testing and subsequent enhancements, informed by initial user feedback.
“It’s a normal process for us to launch something in preview, testing closely with a few customers to get feedback and these are very deep interactions, so we don’t want to start with a lot of people,” Vasi Philomin, VP and GM for Generative AI at Amazon told VentureBeat. “We’ve got a team that has to interact with these customers to really understand where we could do better and what other things we may be missing.”
How Amazon Bedrock has improved to be enterprise ready for GA
The path to general availability is about reliability and hardening such that the service is production ready for enterprise workloads.
Among the many things that AWS has improved in Amazon Bedrock to enable enterprises is regulatory compliance. One regulation the service now complies with, according to Philomin, is the European Union’s GDPR (General Data Protection Regulation).
“We’re talking about enterprise customers and they need to be in compliance with GDPR and that requires a lot of work and we’ve done all of that,” he said.
As part of compliance, enterprises typically also need observability and audit capabilities. To that end, Amazon Bedrock as a generally available service also now integrates with the Amazon CloudWatch service for logging.
Cost control is another critical component for making any service ready for broad enterprise consumption. After all, most organizations have accounting departments and budgets that need to be respected.
Provisioned throughput is a capability that AWS announced for Amazon Bedrock as part of today’s updates. It allows customers to pay for a set amount of throughput from a generative AI model, guaranteeing cost protections and performance levels. With provisioned throughput, customers can specify how many “model units” or tokens they need, avoiding throttling issues if demand spikes.
Philomin noted that the provisioned throughput feature gives customers guaranteed cost caps and assured throughput for their applications, which is important for truly adopting these technologies at scale in an enterprise setting.
Amazon Titan embeddings brings new power to generative AI accuracy
A key part of today’s general availability is the Amazon Titan Embeddings model, which AWS built in-house.
Amazon Titan Embeddings is useful for retrieval augmented generation (RAG) use cases, which help to dramatically improve the accuracy of generative AI. It works by taking words as input and converting them to mathematical vector representations known as embeddings. This allows it to break down documents and queries into an embedding space, improving accuracy when retrieving relevant document fragments to use as answers.
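A simplified sketch of that retrieval step is below. The Bedrock model ID and request shape reflect the Titan Embeddings documentation as of GA and should be verified against the current docs, and the brute-force cosine-similarity search stands in for a real vector database.

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    """Get a Titan embedding for a piece of text (model ID and payload assumed per the GA docs)."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

def top_fragments(query: str, fragments: list[str], k: int = 3) -> list[str]:
    """Rank document fragments by cosine similarity to the query embedding."""
    q = embed(query)
    scored = []
    for frag in fragments:
        v = embed(frag)
        score = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, frag))
    return [frag for _, frag in sorted(scored, reverse=True)[:k]]
```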
Philomin commented that when Amazon Titan Embeddings was first made available in preview, the initial group of users had a lot of feedback. One of the things they asked for was a larger token window, to enable the model to handle larger documents. That change is now reflected in the generally available service, to help ensure it can meet enterprise requirements.
Amazon Titan Embeddings are also being used in combination with other large language models (LLMs) on Amazon Bedrock. Philomin noted that Amazon Titan Embeddings are being used by some customers in combination with Anthropic’s Claude2 model to implement chatbots where knowledge is stored externally as documents. The Titan embeddings model embeds documents into a vector space, while Claude2 is used for the conversational capabilities. This allows the chatbot to retrieve relevant knowledge fragments from the embedded documents to answer questions, without requiring retraining of the language models as the knowledge sources evolve.
CodeWhisperer previews new features
Alongside the general availability of Amazon Bedrock, AWS also today announced a preview of new capabilities for the Amazon CodeWhisperer generative AI service.
The new capabilities now enable enterprise users to benefit from an organization’s own private code repositories in a safe and secure manner.
“This unlocks new levels of developer productivity,” Philomin said. “General coding assistants are usually general purpose; they know how to write code generally, but they wouldn’t know anything about your internal code, because they’ve never had an opportunity to learn from that.”
"
|
3,075 | 2,023 |
"The Data Ownership Protocol (DOP) is a real game changer for Ethereum | VentureBeat"
|
"https://venturebeat.com/business/the-data-ownership-protocol-dop-is-a-real-game-changer-for-ethereum"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Contributor Content The Data Ownership Protocol (DOP) is a real game changer for Ethereum Share on Facebook Share on X Share on LinkedIn In today’s digital world, crypto transactions come at a steep privacy cost. The minute you send assets to a friend, your entire financial history is exposed on the blockchain for anyone to see. But what if you could take back control? Enter DOP — the Data Ownership Protocol.
DOP is a revolutionary protocol built on Ethereum that leverages zero knowledge cryptography to give users total control over their data. With DOP, you get to decide what information you share, who you share it with, and what remains private.
Importantly, DOP will implement safeguards like zero-knowledge KYC and a governance committee to prevent bad actors from abusing the network. Users can transact freely without worrying about who they are interacting with.
The first phase of DOP enables fully private crypto transactions on Ethereum. Send and receive assets without publicly disclosing balances or transaction details. The technology is already complete and ready to empower users to transact freely without fear of exposing their financial activity.
But DOP is about more than just privacy. It’s about giving you back control of your data. In the next phase, DOP will allow you to choose exactly what holdings and activity you want to reveal — on a case by case basis. Search for your wallet on Etherscan and it will show nothing, but look it up on DOPscan and you can disclose a customized profile showing approved info like NFTs, token balances or transaction history.
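DOP's actual design relies on zero-knowledge proofs, but the basic idea of committing to data and then disclosing only chosen fields can be illustrated with a much simpler salted-hash commitment scheme. The Python sketch below is a toy analogy, not the protocol itself, and the wallet fields are invented.

```python
import hashlib
import os

def commit(attributes: dict) -> tuple[dict, dict]:
    """Publish one salted hash per attribute; keep the salts (and values) private."""
    salts = {k: os.urandom(16).hex() for k in attributes}
    commitments = {
        k: hashlib.sha256(f"{k}:{v}:{salts[k]}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return commitments, salts

def disclose(attributes: dict, salts: dict, fields: list) -> dict:
    """Reveal only the chosen fields, together with their salts."""
    return {k: {"value": attributes[k], "salt": salts[k]} for k in fields}

def verify(commitments: dict, disclosed: dict) -> bool:
    """Anyone holding the public commitments can check the revealed fields."""
    return all(
        hashlib.sha256(f"{k}:{d['value']}:{d['salt']}".encode()).hexdigest() == commitments[k]
        for k, d in disclosed.items()
    )

wallet = {"usdc_balance": "12500", "nft_count": "3", "tx_history": "redacted"}
public_commitments, private_salts = commit(wallet)
shared = disclose(wallet, private_salts, ["nft_count"])  # reveal NFT count only
print(verify(public_commitments, shared))                 # True; the balance stays hidden
```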
DOP doesn’t stop there. Future phases will enable developers to build apps on top of the protocol, allow decentralized lending and trading while maintaining privacy, and even facilitate an entire internal DOP ecosystem with NFT markets, ICO platforms and DEXs.
The vision behind DOP is bringing crypto to the mainstream by letting users control their data. No more forced transparency at the hands of public blockchains.
DOP is already making waves. Last month, they sponsored a Binance Campus event introducing DOP to top influencers in Latin America. In November, DOP will be a highlight sponsor of Binance Blockchain Week, exposing the protocol to thousands of new potential users.
VentureBeat newsroom and editorial staff were not involved in the creation of this content.
"
|
3,076 | 2,023 |
"Expanding Trading Opportunities Across Capital Markets, OneChronos Secures $40 Million Series B Investment Round | VentureBeat"
|
"https://venturebeat.com/business/expanding-trading-opportunities-across-capital-markets-onechronos-secures-40-million-series-b-investment-round"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Expanding Trading Opportunities Across Capital Markets, OneChronos Secures $40 Million Series B Investment Round Share on Facebook Share on X Share on LinkedIn Round led by Addition; New funds will help OneChronos expand its product offering and footprint across capital markets NEW YORK–(BUSINESS WIRE)–September 29, 2023– OneChronos , a technology company leveraging advances in auction theory and artificial intelligence to optimize financial markets, announced today the completion of its Series B investment round of $40 million. The financing was led by Addition.
OneChronos operates Smart Market periodic auctions at the speed, scale, and resiliency required of the most demanding electronic capital markets. Starting with U.S. equities, these auctions optimize for “best execution,” fostering competition on transaction quality rather than speed. Launched in Q3 2022, the Company has so far facilitated more than $60 billion in institutional securities transactions, with volumes growing more than 35% month over month.
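OneChronos's Smart Markets use expressive, optimization-based matching, but the basic mechanics of a periodic (call) auction can be sketched in a few lines: collect orders over an interval, then pick the single uniform price that maximizes executable volume. The simplified Python example below is illustrative only, not the company's actual mechanism.

```python
def uniform_clearing_price(bids, asks):
    """bids/asks: lists of (limit_price, quantity). Returns (clearing_price, matched_volume)."""
    candidate_prices = sorted({p for p, _ in bids} | {p for p, _ in asks})
    best_price, best_volume = None, 0
    for price in candidate_prices:
        demand = sum(q for p, q in bids if p >= price)  # buyers willing to pay at least this price
        supply = sum(q for p, q in asks if p <= price)  # sellers willing to sell at or below it
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = price, volume
    return best_price, best_volume

# Orders collected during one auction interval.
bids = [(10.05, 300), (10.03, 500), (10.01, 200)]
asks = [(10.00, 400), (10.02, 300), (10.06, 500)]
print(uniform_clearing_price(bids, asks))  # (10.02, 700): 700 shares cross at a single price
```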
Growth has been driven by strong execution performance, as evaluated by OneChronos’ network of over 45 banks, brokers, and dealers servicing thousands of asset managers for their electronic trading needs.
The funds raised will help OneChronos expand to new markets and launch new products that allow additional strategy-level constraints within auctions. Doing so will unlock mutually beneficial trading opportunities missed by legacy auction and market formats, and further enhance potential execution quality of trading algorithms and workflows.
“OneChronos has clearly demonstrated the execution quality advantage of its product in a highly competitive and established market,” said Andrew Miskiewicz, Investor at Addition. “We look forward to supporting the OneChronos team as they continue solving the unique challenges of operating high-speed Smart Markets, positioning them as the industry leader.” While Smart Markets have been used in other industries, their use in capital markets requires unprecedented speed, scale, and resiliency. A core technical challenge is running these auctions within milliseconds. The OneChronos team operates at the intersection of state-of-the-art computer engineering, AI/ML approaches, mechanism design, and operations research; all of which are applied to capital markets in order to meet these unique needs.
Kelly Littlepage, Chief Executive Officer and Founder of OneChronos, said, “We founded OneChronos as a team of institutional investors, traders, and technologists with a deep appreciation for the cost and complexity of institutional trading and the staggering investment returns lost to market friction. Markets like digital display advertising drew their inspiration from capital markets but quickly evolved beyond after seeing the compelling results modern market mechanisms delivered. Capital markets still use 200+ year-old mechanics predating modern theory and computing capabilities. We’re incredibly excited to partner with Addition in our journey to change that, and grow the global GDP by helping institutions better optimize their investment objectives.”
About OneChronos
OneChronos is a technology company of diverse thinkers innovating at the intersection of capital markets, mechanism design, and operations research, working to grow the global GDP by designing and operating matching markets leveraging advances in auction theory and artificial intelligence.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230929257926/en/
Media [email protected]
"
|
3,077 | 2,023 |
"CIBC Innovation Banking Provides Funding to Nanoprecise Science Corp. | VentureBeat"
|
"https://venturebeat.com/business/cibc-innovation-banking-provides-funding-to-nanoprecise-science-corp"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release CIBC Innovation Banking Provides Funding to Nanoprecise Science Corp.
VANCOUVER, British Columbia–(BUSINESS WIRE)–September 29, 2023– CIBC Innovation Banking announced today that it has provided a $3.5 million debt financing facility to Edmonton-based Nanoprecise Sci Corp. (Nanoprecise). The company plans to use the funds for working capital purposes.
Nanoprecise is an automated AI-based maintenance solution provider that helps companies read and interpret data to effectively predict the remaining useful life of any machine or asset. Nanoprecise’s diagnostics provide early detection of changes in machine operations, helping reduce downtime and prevent production delays.
“It was really important for us to work with an experienced financial institution that understood our vision as we’re excited about the massive opportunity for new and innovative solutions in the asset prediction space,” said Sunil Vedula, CEO at Nanoprecise.
The company has become a trusted solution provider in the asset management industry with offices in Canada, the U.S., India, and Europe.
“We are very excited to work with Nanoprecise as it continues to expand across Canada and abroad,” said Joe Timlin, Managing Director, CIBC Innovation Banking. “Through its easy-to-use suite of products, Nanoprecise is helping companies save time and money with innovative technology.” “Nanoprecise is growing at an accelerated pace, and this is the right time for us to foster a long-term relationship with a team like CIBC Innovation Banking which has a global presence and range of products and services to meet our unique needs,” added Nouman Usami, VP of Finance, Nanoprecise.
About CIBC Innovation Banking CIBC Innovation Banking delivers strategic advice, cash management and funding to innovation companies across North America, the UK, and select European countries at each stage of their business cycle, from start up to IPO and beyond. With offices in Atlanta, Austin, Boston, Chicago, Denver, Durham, London, Menlo Park, Montreal, New York, Reston, Seattle, Toronto and Vancouver, the team has extensive experience and a strong, collaborative approach that extends across CIBC’s commercial banking, private banking, wealth management and capital markets businesses.
About Nanoprecise Nanoprecise, based in Edmonton, AB and founded in 2017, utilizes AI and machine learning to monitor the useful life of equipment and predict any required maintenance or issues before they cause disruption to plant operations. The Company uses Internet of Things (IoT) and AI technology to increase safety and reduce unplanned outages and maintenance costs. The Company has approximately 100 employees, with offices in Canada, India, the USA and throughout Europe.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230929561136/en/ Katarina Milicevic, [email protected], 416-784-6108
"
|
3,078 | 2,023 |
"CIBC Innovation Banking Provides Financing Support for Clearhaven Partners' Investment in Korbyt | VentureBeat"
|
"https://venturebeat.com/business/cibc-innovation-banking-provides-financing-support-for-clearhaven-partners-investment-in-korbyt"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release CIBC Innovation Banking Provides Financing Support for Clearhaven Partners’ Investment in Korbyt Share on Facebook Share on X Share on LinkedIn BOSTON–(BUSINESS WIRE)–September 28, 2023– CIBC Innovation Banking announced today that it has provided financing support for Clearhaven Partners’ investment in Korbyt, an industry-leading workplace experience platform.
The financing will support Clearhaven’s investment in the company and Korbyt’s growth trajectory by helping expand reseller and technology partnerships, facilitating customer support growth, and fueling additional innovation in Korbyt’s cloud-native software.
Korbyt’s SaaS platform, Korbyt Anywhere, is a leading next-generation technology for corporate omni-channel communications and content management, enabling organizations to create and distribute compelling messages, visualize mission-critical data, and boost employee productivity.
“Korbyt is on an accelerated growth trajectory with significant investments made in our product, team and customer support,” said Ankur Ahlowalia, CEO of Korbyt. “The team at CIBC Innovation Banking is a valued partner to the business as we look to make additional investments in growth and provide best-in-class customer experiences.” “We are pleased to continue our banking relationship with Korbyt,” said Andrew Phillips, Managing Director, CIBC Innovation Banking. “The company has built leading digital signage and workplace experience software, and has continued to innovate and scale at an impressive rate. We are thrilled to support Korbyt’s growth and strategic objectives, and work with the incredible team at Clearhaven Partners.” About CIBC Innovation Banking CIBC Innovation Banking delivers strategic advice, cash management and funding to innovation companies across North America, the UK, and select European countries at each stage of their business cycle, from start up to IPO and beyond. With offices in Atlanta, Austin, Boston, Chicago, Denver, Durham, London, Menlo Park, Montreal, New York, Reston, Seattle, Toronto and Vancouver, the team has extensive experience and a strong, collaborative approach that extends across CIBC’s commercial banking, private banking, wealth management and capital markets businesses.
About Korbyt Korbyt Anywhere is a workplace experience platform that enables companies to reach targeted audiences and deliver relevant content, data and information, while also enabling easy access to systems and resources on any screen, anywhere. Powerful, cloud-based CMS capabilities engage employees via a wide range of channels, including digital signage, desktop, email and mobile devices. With 1M+ deployed endpoints and industry-leading cloud migration capabilities, Korbyt Anywhere is the most advanced platform for engaging employees and customers. The Company is headquartered in Dallas, Texas, with additional offices worldwide including the United Kingdom. For more information, visit https://www.korbyt.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230928673763/en/ Katarina Milicevic, [email protected], 416-784-6108
"
|
3,079 | 2,023 |
"Zerobroker eliminates freight broker fees with AI-powered logistics platform | VentureBeat"
|
"https://venturebeat.com/ai/zerobroker-eliminates-freight-broker-fees-with-ai-powered-logistics-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zerobroker eliminates freight broker fees with AI-powered logistics platform Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Logistics startup Zerobroker has raised $6.5 million in seed funding to expand its AI-powered platform that removes freight brokers from the shipping process. The round included participation from Flexport, FundersClub, Streamlined Ventures, and others.
The San Francisco-based startup’s technology aims to reduce costs and streamline logistics for shippers. Led by founder and CEO Georgy Melkonyan, the firm is on a mission to introduce clarity and efficiency into an industry often criticized for being opaque.
“We bring transparency to an industry that historically has not been transparent,” said Zerobroker founder and CEO Georgy Melkonyan in an interview with VentureBeat. “We show who a shipper is, who the trucker is. There is no hidden information.” By connecting shippers directly with carriers, Zerobroker eliminates the 20-30% commission fee traditionally charged by freight brokers on each transaction. This allows shippers to lower their transportation costs significantly.
The platform uses artificial intelligence to automate up to 90% of repetitive tasks performed by logistics teams, such as managing freight, paperwork and payments. It provides real-time visibility into shipments and ensures full compliance.
“With Zerobroker, shippers can create a shipment and the system takes care of everything else – informing customers, suppliers, automating payments and paperwork,” Melkonyan explained. “Everyone gets notified automatically.” Since launching in February 2022, Zerobroker has been adding 50% more customers every month and has not lost a single client. The company’s rapid growth shows the demand for digital innovation in the logistics industry.
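Zerobroker's code is not public, but the "create a shipment and everything else happens automatically" workflow described above is essentially an event-driven pipeline. The sketch below is hypothetical; every event name and function is invented for illustration and is not Zerobroker's API.

```python
# Hypothetical sketch of an event-driven shipment workflow: creating a
# shipment automatically triggers notifications, paperwork, and payment
# steps. All names are invented for illustration.

from collections import defaultdict
from typing import Callable, DefaultDict

handlers: DefaultDict[str, list[Callable[[dict], None]]] = defaultdict(list)

def on(event: str):
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def emit(event: str, payload: dict):
    for fn in handlers[event]:
        fn(payload)

@on("shipment.created")
def notify_parties(shipment):
    print(f"notify buyer, supplier and carrier for shipment {shipment['id']}")

@on("shipment.created")
def generate_paperwork(shipment):
    print(f"generate bill of lading and invoice for shipment {shipment['id']}")

@on("shipment.delivered")
def reconcile_payment(shipment):
    print(f"trigger ACH payment and reconciliation for shipment {shipment['id']}")

emit("shipment.created", {"id": "SHP-001"})
emit("shipment.delivered", {"id": "SHP-001"})
```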
The new funding will help Zerobroker expand its engineering team to build more advanced capabilities into the platform, like long-term contract pricing. The startup also plans to broaden support for additional freight modes like LTL and flatbed.
Zerobroker’s success demonstrates the power of AI and automation to transform traditional industries. By removing middlemen brokers and optimizing workflows, data-driven platforms like Zerobroker are driving down costs and making supply chains more efficient. As Melkonyan noted, “Logistics is the backbone of any economy.”
"
|
3,080 | 2,023 |
"What comes after AIOps? Atera says it’s AI powered IT (AIT) | VentureBeat"
|
"https://venturebeat.com/ai/what-comes-after-aiops-atera-says-its-ai-powered-it-ait"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What comes after AIOps? Atera says it’s AI powered IT (AIT) Share on Facebook Share on X Share on LinkedIn VentureBeat made with Stable Diffusion Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The concept of AIOps, that is, using AI to help optimize IT operations, is not a new one and predates the modern era of generative AI. Generative AI can bring more capabilities to IT operations, according to IT management platform vendor Atera, which today is announcing its new AI-powered IT (AIT) platform. Back in January, Atera began to integrate generative AI into its platform to help enable its users to more easily write IT automation and operations scripts.
Now the company is going significantly further, partnering with Microsoft and deeply integrating with the Azure OpenAI service to go beyond scripts and help bring IT operations to a new level of AI powered automation.
The new Atera AI-powered capabilities include Autopilot, Copilot, and the Toolbox functionality. Autopilot aims to automatically fix IT issues through AI before they escalate to a human. Copilot provides AI-generated actions and answers to help IT solve issues, while Toolbox provides AI tools for specific IT operations tasks.
“We went all in with AI and we’ve changed the product,” Gil Pekelman, CEO of Atera, told VentureBeat in an exclusive interview. “In a sense, we’ve changed the way IT is done through AI.” How Atera is moving beyond AIOps with generative AI Atera has been developing a remote IT operations and management platform since 2011, building out its own set of capabilities and knowledge base for IT.
The addition of generative AI moves Atera beyond processes that always require human interaction, to more automated capabilities that can help IT teams and their users to more rapidly solve issues.
Pekelman explained that now with Atera’s AIT platform, the autopilot capabilities can help to solve issues. For example, among the most common types of complaints that IT users will often have is that their internet connection is slow. In that case, Atera can automatically run a series of checks on the user connection, including the network interface card and system resources to identify the root cause of the issue.
The Atera system will either come to a conclusion and then enable the user to rapidly solve it with a single click, or it will escalate the issue to a human IT operator to help solve it. The human operator will be aware of what the automated system has already attempted and then benefit from the copilot capabilities to help solve the user issue.
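Atera has not published Autopilot's internals, but the triage-then-escalate flow described here can be sketched generically: run a set of checks, offer a one-click fix when a likely cause is found, and otherwise hand the ticket to a human along with the collected evidence. The checks and fixes below are invented placeholders, not Atera's actual logic.

```python
# Generic sketch of automated triage for a "my internet is slow" ticket:
# run checks, suggest a one-click fix if a likely cause is found, otherwise
# escalate to a human with the evidence attached. Checks and thresholds are
# illustrative placeholders.

def check_link_speed():   # placeholder probe
    return {"name": "nic_link", "ok": True, "detail": "1 Gbps full duplex"}

def check_latency():      # placeholder probe
    return {"name": "latency", "ok": False, "detail": "avg ping 480 ms"}

def check_cpu_memory():   # placeholder probe
    return {"name": "resources", "ok": True, "detail": "CPU 12%, RAM 41%"}

SUGGESTED_FIXES = {"latency": "Restart the VPN client and re-test the connection."}

def triage(ticket_id: str):
    results = [check_link_speed(), check_latency(), check_cpu_memory()]
    failures = [r for r in results if not r["ok"]]
    if len(failures) == 1 and failures[0]["name"] in SUGGESTED_FIXES:
        return {"ticket": ticket_id, "action": "one_click_fix",
                "fix": SUGGESTED_FIXES[failures[0]["name"]], "evidence": results}
    return {"ticket": ticket_id, "action": "escalate_to_human", "evidence": results}

print(triage("T-1042"))
```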
According to Pekelman, it is the combination of his firm’s generative AI capabilities that move Atera beyond AIOps.
“When you look at the combination of these capabilities, this is AI powered IT,” he said. “It’s not AIops, it’s not running on a lot of data and trying to make sense of it, it’s really operating it using AI.” How Atera’s AIT platform works Atera is using the Microsoft Azure OpenAI service to help with many of its automations, though Atera CTO Oshri Moyal emphasized that it’s a fairly complicated setup when you dig in.
“We call it the brain, and we have many microservices combined together, it’s not just a simple API call to OpenAI,” Oshri told VentureBeat.
Some of the AI capabilities in the Atera platform were built internally, including the company’s sentiment analysis capabilities. Now, when looking at a list of trouble tickets, Atera users can get a visual indication of user sentiment about a given issue.
As the platform is trained on the specific actions that the Atera platform can enable, Oshri also claimed that the risk of any type of hallucination is very low to non-existent. Oshri noted that in the Atera use case, the technical answers are not being invented, they are being inferred from the data that the model was already trained on.
The overall goal of the Atera AIT capabilities is to increase efficiency. Pekelman estimated that the platform can increase an IT professional’s ability to close more trouble tickets from 7 to 70 per day and reduce resolution times from hours to minutes.
“We’re giving a 10x efficiency to IT departments,” Pekelman said.
"
|
3,081 | 2,023 |
"Vise Intelligence wants to use AI to assist financial advisors | VentureBeat"
|
"https://venturebeat.com/ai/vise-intelligence-launches-new-ai-to-assist-not-replace-financial-advisors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Vise Intelligence is a new AI to assist — not replace — financial advisors Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Of all the industries rushing to embrace generative AI, it seems odd that we haven’t heard, or reported, more about fintech and, more specifically, financial advisors.
But here comes Vise to buck the trend. The seven-year-old New York City fintech and former unicorn, co-founded by two 16-year-olds (at the time), suffered a bout of bad press over the last several years: it lost 35% of its assets under management (AUM) in a matter of months, and Business Insider reported on the departure of its largest client, Manhattan West, as well as the loss of more than 100 employees since its start through attrition and layoffs.
The Business Insider report suggested the co-founders’ youth, inattention, and inexperience may have led to these issues, and the duo later admitted to RIABiz they needed to change course and do a “hard reset.” Now, Vise is ready for a big comeback and more focused than ever with the release of its new AI service, Vise Intelligence , a conversational AI model designed to support human financial advisors by preparing them reports, answering their questions, and surfacing up-to-the-minute information about investment portfolios to go over with their clients. Vise Intelligence will be available to all advisors who use the Vise platform at no additional cost, according to the company.
“Supporting financial advisors with artificial intelligence that can help do their jobs better will create better investment outcomes for all the clients that use them, and make [the financial advisors] more accessible to more people,” said Samir Vasavada, CEO and one of the firm’s original co-founders, in a video call interview with VentureBeat.
For example, a financial advisor who uses Vise Intelligence could prompt the assistant with the phrase “Ava Harris called with concerns about investing in energy companies,” referring to a client’s phone call. Vise could then provide the financial advisor with information about that client’s specific portfolio, suggest ways for the advisor to tweak the portfolio to the client’s wishes, and then draft an email to send to the client about the changes the advisor would implement in the investment strategy.
In this way, the human advisor and their client remain in control, but Vise Intelligence is always standing by to act as a helpful assistant capable of pulling together information and suggesting how it could be used.
Where AI and fintech collide Vasavada co-founded Vise back in 2016 alongside Runik Mehrotra, still its chief investment officer (CIO), a year before the generative AI boom got started with the publication of the “ Attention Is All You Need ” paper by Google researchers on arxiv that led to the transformer model architecture now underpinning Vise Intelligence and most other leading AI models, such as OpenAI’s ChatGPT.
The company offers “highly personalized portfolios, fully automating the investment management process, and providing deep insights on each investment decision,” according to one of its earlier funding announcements , and will continue to do so, paired with Vise Intelligence.
As for what specific AI is being used to enable Vise Intelligence, Vasavada didn’t provide details, but in a Medium post published today , the CEO wrote: “Vise Intelligence is powered by cutting-edge large language models, which we’ve fine-tuned using relevant investment and portfolio management data.” RIABiz reported earlier that Vise planned to “incorporate new AI models, like ChatGPT, where applicable,” and quoted Mehrota saying it would “end up building functionality that sits on top of one of these pre-trained models … we’re not always going to be internal forever … [what] we build will be technology that sits on top of models these AI companies come out with,” so it is probable that GPT is powering some of the tech.
However, Vasavada did note that Vise Intelligence was designed to ingest information from every specific client in a financial advisor customer’s portfolio — from the “high net-worth to the low-net worth… small-single person firms that are managing money for teachers and firemen, boutique firms with 20 or 30 advisors that are managing hundreds of millions of dollars for executives, enterprise wealth management firms that manage hundreds of billions of dollars in assets for all kinds of different clients,” in his words — and custom tailor its insights for them, so their human advisor could talk them through what was happening with their money.
Leveraging market data and individual client goals Vasavada also noted that Vise Intelligence was trained on “tens of thousands of data points from the market on different companies, fundamentals of companies, broad market trends” and that this training was combined with information from each client in a secure inference.
“This is previous information and forward looking information,” Vasavada clarified, including the client’s previous positions, trades, gains and losses, as well as their financial goals, risk tolerance, retirement date, other financial milestones such as sending kids to college or purchasing homes.
All this is combined, in turn, with guidance from the financial advisor telling Vise Intelligence what kinds of strategies and investment opportunities the advisor wishes to follow to fulfill their clients’ goals.
“Think about it as inputs from the market, the client and the advisor,” Vasavada said.
Vise did not specify how it secures client data, but Vasavada said “the data is very secure and protected.” Ultimately, the company believes that “wealth management is going through a transformation,” according to Vasavada, one wherein “technology and investment management are no longer going to be separate.” And Vise wants to be the one financial advisors turn to when looking to serve more clients with a human touch.
The ideal scenario for Vise’s financial advisor clients is that “you went from managing 100 clients before to being able to manage 150 clients, and all of your time is being spent on the thing you love doing, which is managing and building client relationships,” the CEO told VentureBeat.
That kind of scaling, of course, also benefits Vise, which charges a percentage of an advisor’s total assets under management on their platform, according to their Form ADV.
Updated and corrected on Oct. 2, 2023 at 1:46 pm to clarify that Vise Intelligence is included in the pricing of the current platform, that Vise charges for assets-under-management, not portfolio performance, and that Vise continues to offer its original services.
"
|
3,082 | 2,023 |
"Slope raises $30M for AI-powered B2B payments platform | VentureBeat"
|
"https://venturebeat.com/ai/sam-altman-backed-slope-raises-30m-for-ai-powered-b2b-payments-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sam Altman-backed Slope raises $30M for AI-powered B2B payments platform Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Tracking and ensuring the integrity of payments is no easy task for any business, but in the B2B space, there is the added difficulty of ensuring the companies you are selling to will actually make their payments on time, won’t stiff you on some baseless excuse, go out of business, or try to wriggle out of them through clever accounting, bankruptcy, litigation, or other means. These issues are far rarer in the B2C space.
How can B2B companies, such as wholesalers selling products in bulk, ensure that they will be paid for the products and services they are rendering to their customers? Slope , a two-year-old AI startup founded in San Francisco, is attempting to create the gold standard: a B2B payments tracking and receiving platform that is powered in part by its own “rules-based” tech and partly by OpenAI’s GPT-3.5 Turbo. Slope is also developing its own proprietary, in-house large language models (LLMs).
The company today announced a $30 million equity round led by Fred Wilson’s famed Union Square Ventures with participation from OpenAI CEO and co-founder Sam Altman , for a total funding to date of $187 million. That’s hefty cash for a lean team of just 18 full-time employees.
“We run very efficiently,” said Lawrence Lin Murata, Slope’s CEO and co-founder, in a video call with VentureBeat. Lin Murata said he experienced firsthand the challenges B2B vendors face from working at the wholesale goods business operated by his parents in their home country, Brazil.
“For B2B businesses, it’s their lifeblood to be able to get money fast and send money to their vendors,” he added.
Slope’s technology informs the entire B2B customer payments journey The company will use the cash to continue building out its team and technology — which includes an online payments and invoicing tool that Slope’s customers can use to accept payments from their customers, including credit card, ACH (automated clearing house), and international payments.
“We start from customer onboarding, risk assessment, and go all the way to reconciling, including everything in between,” said Alice Deng, co-founder and chief product officer of Slope.
That includes Slope analyzing a buyer’s “credit risk, invoicing, billing, cash application claims, and reductions and all the way to syncing into [a B2B customers’] accounting system — it’s all handled in our platform,” Deng added.
Slope further offers financing for its customers’ customers, allowing those who can’t pay upfront to be granted credit directly through Slope’s payments system.
It also delivers a newfound level of “visibility,” into B2B payment workflows that can be “old school” and more obscure, according to Deng.
One example of this newfound visibility is Slope Timeline, a feature that uses Slope’s understanding of a business’s transactions to keep the B2B vendor and their customers informed of payment and product shipping statuses in near realtime.
“Both the buyer and the seller understand exactly what stage there are in to the millisecond,” said Deng. That’s a marked improvement from businesses traditionally wondering, “oh, is my order open yet, did it ship? Did my wire transfer get reconciled?” A foundation of ‘clean data’ Key to Slope’s approach to understanding, providing updated information on, and helping de-risk B2B payments for its enterprise customers is its focus on obtaining “clean data” from them.
“The foundation of that is clean data, it powers everything in the system,” Deng said. “We’re an AI company, but we’re actually a clean data company.” To achieve “clean data,” Slope works with its enterprise customers to gather all of the data about the orders those customers are receiving, processing, and shipping out.
“That data is formatted and surfaced in ways that are useful to the customer within our platform,” said Lin Murata.
SlopeGPT leverages GPT in a novel way, turning enterprise transaction data into a fraud risk analysis tool Here’s where some of Slope’s AI approach comes into play: Slope is able to assess a B2B buyer’s creditworthiness and their fraud risk, and extend them the optimal credit to incur the minimal risk on the seller/Slope’s customer, using SlopeGPT , a tool unveiled in April.
SlopeGPT takes a Slope enterprise customer’s transaction and purchase order data, runs it through a dedicated instance of OpenAI’s GPT (not on the public internet or accessible to third-parties), and clusters the data into embeddings that can determine which types of payments are regular and which are anomalous. Slope then uses these embeddings, and other rules-based data management techniques, to surface relevant data and suggestions to both its customers and their customers.
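Slope has not released SlopeGPT's implementation, but the embed-then-cluster idea can be illustrated with any text-embedding model plus a standard clustering step: transactions whose embeddings sit far from their cluster centroid get flagged for review. In the sketch below the embedding function is a random stand-in and the threshold is arbitrary, so the printed flags only become meaningful once a real embedding model is plugged in.

```python
# Illustrative embed-and-cluster anomaly check for transaction records.
# embed() is a stand-in for a real text-embedding model (Slope uses a
# dedicated GPT instance); the cluster count and threshold are arbitrary.

import numpy as np
from sklearn.cluster import KMeans

def embed(texts):
    # Stand-in: replace with the output of a real embedding model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 32))

transactions = [
    "ACH $12,400 weekly produce order, vendor #8821",
    "ACH $11,900 weekly produce order, vendor #8821",
    "ACH $12,100 weekly produce order, vendor #8821",
    "Card $640 fuel surcharge, carrier #112",
    "Wire $95,000 to new overseas account, first occurrence",
]

X = embed(transactions)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Distance of each transaction to its own cluster centroid.
dist_to_own_centroid = km.transform(X)[np.arange(len(X)), km.labels_]

THRESHOLD = 1.5 * dist_to_own_centroid.mean()
for txn, d in zip(transactions, dist_to_own_centroid):
    flag = "REVIEW" if d > THRESHOLD else "ok"
    print(f"{flag:6s} dist={d:.2f}  {txn}")
    # With real embeddings, the unusual wire transfer would sit far from
    # the routine-payment cluster and be the one flagged for review.
```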
When analyzing a buyer’s risk for granting them financing/credit, SlopeGPT can also look for clues.
“If there’s anomalous activities, or somebody’s pretending to be a different business, if they’ve stolen information from another business…if they’re intentionally trying to show strong cash flow, and then do sudden transfers out, we can detect a lot of these anomalies to prevent payments,” Lin Murata said.
Slope discovered the power of GPT for this purpose by feeding its own 2.5 million bank transactions, spanning 18 months, into the model.
The company has also developed its own proprietary LLM — trained on public data — that performs even better at accurately identifying risk and will be released soon, according to the founders.
"
|
3,083 | 2,023 |
"OpenAI gives ChatGPT access to the entire internet | VentureBeat"
|
"https://venturebeat.com/ai/openai-gives-chatgpt-access-to-the-entire-internet"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI gives ChatGPT access to the entire internet Share on Facebook Share on X Share on LinkedIn VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
OpenAI’s ChatGPT has been an undoubtedly powerful and interesting tool since its release in November 2022, but it has been limited by the scope of its knowledge, which only included information up to September 2021. That changes today.
OpenAI just announced on X (formerly Twitter) that ChatGPT “can now browse the internet to provide you with current and authoritative information, complete with direct links to sources,” thanks to an integration with Microsoft’s Bing search engine.
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021.
The company said the capability was now available to ChatGPT Plus subscribers and ChatGPT Enterprise users, and could be chosen with the drop-down menu under the GPT-4 selector at the top of the application.
It actually marks a return to web browsing for ChatGPT. Back in March, when OpenAI debuted ChatGPT third-party plugins , it also announced two of its own plugins — Code Interpreter (which has since been renamed “ Advanced Data Analysis ” and allows ChatGPT to accept uploaded files), and “ Browsing ” which used the Microsoft Bing API and a text-based browser to search the web and summarize information for users, complete with superscript citations that the user could hover over and click to visit the source website.
Yet, within a few days, OpenAI disabled the browsing feature as users were able to deploy it to bypass the paywalls of leading news publishers. It appears the feature has returned with those sites excluded, because, according to OpenAI, the ChatGPT browsing feature now recognizes the “robots.txt” code that website owners can add to exclude Google and other web crawlers from searching and indexing their content.
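For site owners, opting out works through ordinary robots.txt directives. OpenAI documents a crawler user-agent called GPTBot, and a separate ChatGPT-User agent is used when the browsing feature fetches a page on a user's behalf; the exact agent names should be checked against OpenAI's current documentation. A robots.txt along these lines would exclude a site:

```
# robots.txt: opt out of OpenAI crawling and browsing
# (agent names per OpenAI's documentation; verify before relying on them)
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```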
Still, the return of web browsing of public, non-paywalled sites was heralded by the company’s leadership on their personal X accounts, with CEO Sam Altman posting “we are so back,” and CTO Mira Murati echoing the sentiment.
It should also be noted that Microsoft’s Bing Chat, introduced in February , is powered by “a new, next-generation OpenAI large language model that is more powerful than ChatGPT” and has since then included the ability to browse the web with ChatGPT-style functionality and citations as well.
So what are the differences, if any, between using Bing Chat powered by OpenAI and using ChatGPT browsing with Bing Chat? Sources with knowledge of the situation told VentureBeat that the ChatGPT interface allows users to take advantage of browsing without leaving the ChatGPT interface and its many other features.
The new ChatGPT browsing capabilities come just two days after OpenAI also announced the ability for ChatGPT to scan and analyze images and conduct conversations over audio , including analyzing a user’s uploaded audio and speaking back to the user in a generated voice. Last week, OpenAI further announced its new image generation model DALL-E 3 , which it said had been rewritten to take advantage of ChatGPT’s natural language processing and conversational skills.
"
|
3,084 | 2,023 |
"Europe's largest seeded startup Mistral AI releases first model | VentureBeat"
|
"https://venturebeat.com/ai/mistral-ai-europe-startup-releases-mistral-7b-model"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Europe’s largest seeded startup Mistral AI releases first model, outperforming Llama 2 13B Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Mistral AI , the six-month-old Paris-based startup that made headlines with its unique Word Art logo and a record-setting $118 million seed round — reportedly the largest seed in the history of Europe — today released its first large language AI model , Mistral 7B.
The 7.3 billion parameter model outperforms bigger offerings, including Meta’s Llama 2 13B (one of the smaller of Meta’s newer models), and is said to be the most powerful language model for its size (to date).
It can handle English language tasks while also delivering natural coding capabilities, making it another option for multiple enterprise-centric use cases.
Mistral said it is open-sourcing the new model under the Apache 2.0 license, allowing anyone to fine-tune and use it anywhere (locally to cloud) without restriction, including for enterprise cases.
Meet Mistral 7B Founded earlier this year by alums from Google’s DeepMind and Meta, Mistral AI is on a mission to “make AI useful” for enterprises by tapping only publicly available data and those contributed by customers.
Now, with the release of Mistral 7B, the company is starting this journey, providing teams with a small-sized model capable of low-latency text summarisation, classification, text completion and code completion.
While the model has just been announced, Mistral AI claims to already best its open source competition. In benchmarks covering a range of tasks, the model was found to be outperforming Llama 2 7B and 13B quite easily.
For instance, in the Massive Multitask Language Understanding (MMLU) test, which covers 57 subjects across mathematics, US history, computer science, law and more, the new model delivered an accuracy of 60.1%, while Llama 2 7B and 13B delivered a little over 44% and 55%, respectively.
Similarly, in tests covering commonsense reasoning and reading comprehension, Mistral 7B outperformed the two Llama models with an accuracy of 69% and 64%, respectively. The only area where Llama 2 13B matched Mistral 7B was the world knowledge test, which Mistral claims might be due to the model’s limited parameter count, which restricts the amount of knowledge it can compress.
“For all metrics, all models were re-evaluated with our evaluation pipeline for accurate comparison. Mistral 7B significantly outperforms Llama 2 13B on all metrics, and is on par with Llama 34B (on many benchmarks),” the company wrote in a blog post.
As for coding tasks, while Mistral calls the new model “vastly superior,” benchmark results show it still does not outperform the finetuned CodeLlama 7B.
The Meta model delivered an accuracy of 31.1% and 52.5% in 0-shot Humaneval and 3-shot MBPP (hand-verified subset) tests, while Mistral 7B sat closely behind with an accuracy of 30.5% and 47.5%, respectively.
High-performing small model could benefit businesses While this is just the start, Mistral’s demonstration of a small model delivering high performance across a range of tasks could mean major benefits for businesses.
For example, in MMLU, Mistral 7B delivers the performance of a Llama 2 that would be more than 3x its size (23 billion parameters). This would directly save memory and provide cost benefits – without affecting final outputs.
The company says it achieves faster inference using grouped-query attention (GQA) and handles longer sequences at a smaller cost using Sliding Window Attention (SWA).
“Mistral 7B uses a sliding window attention (SWA) mechanism, in which each layer attends to the previous 4,096 hidden states. The main improvement, and reason for which this was initially investigated, is a linear compute cost of O(sliding_window.seq_len). In practice, changes made to FlashAttention and xFormers yield a 2x speed improvement for a sequence length of 16k with a window of 4k,” the company wrote.
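The idea in that quote is easier to see with a toy attention mask: under sliding-window attention, each position attends only to itself and the previous window - 1 positions, so the number of attended pairs grows linearly with sequence length instead of quadratically. A minimal NumPy illustration, using a window of 4 rather than Mistral's 4,096:

```python
# Toy sliding-window attention mask: position i attends to positions
# j in [i - window + 1, i], so work grows as O(window * seq_len)
# instead of O(seq_len^2) for full causal attention.

import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)   # causal AND within the window

mask = sliding_window_mask(seq_len=8, window=4)
print(mask.astype(int))
print("attended pairs:", mask.sum(), "vs full causal:", 8 * 9 // 2)
```

Because the layers are stacked, information can still propagate beyond a single window from layer to layer, which is how the model handles sequences longer than the window itself.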
The company plans to build on this work by releasing a bigger model capable of better reasoning and working in multiple languages, expected to debut sometime in 2024.
For now, Mistral 7B can be deployed anywhere (from local machines to AWS, GCP or Azure clouds) using the company’s reference implementation, the vLLM inference server and SkyPilot.
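As a rough illustration of the vLLM route, the snippet below loads the weights from the Hugging Face hub and generates one completion. The model identifier mistralai/Mistral-7B-v0.1 and the exact vLLM API surface are assumptions to verify against current documentation, and a GPU with enough memory is required.

```python
# Hedged sketch: serving Mistral 7B locally with the vLLM inference engine.
# Model ID and API details are assumptions; check the vLLM and Mistral docs.

from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-v0.1")      # downloads weights from the HF hub
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize why small LLMs matter for enterprises:"], params)
print(outputs[0].outputs[0].text)
```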
"
|
3,085 | 2,023 |
"Meta quietly releases Llama 2 Long AI model | VentureBeat"
|
"https://venturebeat.com/ai/meta-quietly-releases-llama-2-long-ai-that-outperforms-gpt-3-5-and-claude-2-on-some-tasks"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta quietly unveils Llama 2 Long AI that beats GPT-3.5 Turbo and Claude 2 on some tasks Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Meta Platforms showed off a bevy of new AI features for its consumer-facing services Facebook, Instagram and WhatsApp at its annual Meta Connect conference in Menlo Park, California, this week.
But the biggest news from Mark Zuckerberg’s company may have actually come in the form of a computer science paper published without fanfare by Meta researchers on the open access and non-peer reviewed website arXiv.org.
The paper introduces Llama 2 Long, a new AI model based on Meta’s open source Llama 2 released in the summer , but that has undergone “continual pretraining from Llama 2 with longer training sequences and on a dataset where long texts are upsampled,” according to the researcher-authors of the paper.
As a result of this, Meta’s newly elongated AI model outperforms some of the leading competition in generating responses to long (higher token count) user prompts, including OpenAI’s GPT-3.5 Turbo with its 16,000-token context window, as well as Claude 2 with its 100,000-token context window.
Meta introduces LLAMA 2 Long – context windows of up to 32,768 tokens – the 70B variant can already surpass gpt-3.5-turbo-16k’s overall performance on a suite of long-context tasks. How Llama 2 Long came to be Meta researchers took the original Llama 2 available in its different training parameter sizes — the values of data and information the algorithm can change on its own as it learns, which in the case of Llama 2 come in 7 billion, 13 billion, 34 billion, and 70 billion variants — and included more long-text data sources than the original Llama 2 training dataset. Another 400 billion tokens’ worth, to be exact.
Then, the researchers kept the original Llama 2’s architecture the same, and only made a “necessary modification to the positional encoding that is crucial for the model to attend longer.” That modification was to the Rotary Positional Embedding (RoPE) encoding, a method of programming the transformer model underlying LLMs such as Llama 2 (and Llama 2 Long), which essentially maps their token embeddings (the numbers used to represent words, concepts, and ideas) onto a 3D graph that shows their positions relative to other tokens, even when rotated. This allows a model to produce accurate and helpful responses, with less information (and thus, less computing storage taken up) than other approaches.
The Meta researchers “decreased the rotation angle” of its RoPE encoding from Llama 2 to Llama 2 Long, which enabled them to ensure more “distant tokens,” those occurring more rarely or with fewer other relationships to other pieces of information, were still included in the model’s knowledge base.
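In RoPE terms, decreasing the rotation angle amounts to raising the base frequency used to compute the per-dimension angles, so that distant positions accumulate less rotation and remain distinguishable at very long ranges. The sketch below shows the standard RoPE angle computation with the base left as a parameter; the base values shown are generic examples, not the specific value Meta used, which is described in the paper.

```python
# RoPE angle computation with an adjustable base frequency. Raising the
# base shrinks the rotation applied per position ("decreasing the rotation
# angle"), the kind of change used to stretch context length.
# Base values here are generic; see the Llama 2 Long paper for Meta's choice.

import numpy as np

def rope_angles(positions, dim, base):
    # inv_freq[k] = base^(-2k/dim); angle for position p and pair k is p * inv_freq[k]
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)

positions = np.array([0, 1, 1024, 16384])
small_base = rope_angles(positions, dim=8, base=10_000)
large_base = rope_angles(positions, dim=8, base=100_000)

# With the larger base, far-apart positions are rotated less per dimension,
# keeping very long sequences within a usable range of relative angles.
print(small_base[-1])
print(large_base[-1])
```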
Using reinforcement learning from human feedback (RLHF) , a common AI model training method where AI is rewarded for correct answers with human oversight to check it, and synthetic data generated by Llama 2 chat itself, the researchers were able to improve its performance in common LLM tasks including coding, math, language understanding, common sense reasoning, and answering a human user’s prompted questions.
With such impressive results relative to the regular Llama 2, Anthropic’s Claude 2 and OpenAI’s GPT-3.5 Turbo, it’s little wonder the open-source AI community on Reddit, Twitter and Hacker News has been expressing admiration and excitement about Llama 2 Long since the paper’s release earlier this week — it’s a big validation of Meta’s “open source” approach toward generative AI, and indicates that open source can compete with the closed source, “pay to play” models offered by well-funded startups.
"
|
3,086 | 2,023 |
"Meta announces 'universe of AI' for Instagram, Facebook, WhatsApp | VentureBeat"
|
"https://venturebeat.com/ai/meta-announces-universe-of-ai-for-instagram-facebook-whatsapp"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta announces ‘universe of AI’ for Instagram, Facebook, WhatsApp Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
At a keynote speech at the annual Meta Connect conference today, Mark Zuckerberg announced that Meta is launching massive AI updates across the company’s applications and devices, including Instagram, Facebook and WhatsApp, building “state-of-the-art AI into the apps that billions of people use.” The new AI experiences and features, providing, “different AIs for different things,” include: Meta AI chatbot A beta rollout of an “advanced conversational assistant” available on WhatsApp, Messenger and Instagram, and coming to Ray-Ban Meta smart glasses and Quest 3. In the US, Meta AI will provide real-time information (thanks to a search partnership with Microsoft Bing), as well as a tool for image generation courtesy of a new image model called Emu (Expressive Media Universe). Meta AI is powered by a custom model that leverages technology from the company’s open source LLM, Llama 2.
An AI ‘cast of characters’ Meta launched 28 AIs in beta with unique interests and personalities — some are played by cultural icons and influencers, including Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka. Meta’s press materials call these a ‘new cast of characters – all with unique backstories.’ AI Studio Platform Meta launched the AI Studio platform for businesses to build AI chatbots for the company’s messaging services, including Facebook, Instagram and Messenger. Meta also said in the coming year it will launch a sandbox tool to “enable anyone to experiment with creating their own AI.” Generative AI stickers across apps Soon users will be able to edit images and co-create them with friends on Instagram using new AI editing tools, restyle and backdrop. The tool uses Llama 2 and Meta’s new image generation model, Emu, and turns text prompts into stickers in just a few seconds. This new feature will roll out to select English-language users over the next month in WhatsApp, Messenger, Instagram, and Facebook Stories.
Ray-Ban Smart Glasses have Meta AI built in: By saying “Hey Meta,” you can engage with Meta AI to “spark creativity, get information, and control your glasses—just by using your voice.” Starting at $299 USD, the glasses collection launches on October 17 and is available for pre-order today.
Zuckerberg cautioned that the Meta AI product launches are “early stuff,” saying they “still have lots of limitations,” which becomes apparent when using the new AIs that, unlike Meta AI, don’t have access to real-time information, and that “there’s just a lot to improve” as Meta gets more feedback.
"
|
3,087 | 2,023 |
"How AI can be a 'multivitamin supplement' for many industries | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-can-be-the-multivitamin-supplement-for-many-industries"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How AI can be a ‘multivitamin supplement’ for many industries Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
No matter where you are, AI, ChatGPT and related tools are the most popular topics of conversation. Statistics show that the global generative AI market is growing at a CAGR of just over 27% and will surpass $22 billion by 2025.
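For readers who want to sanity-check that growth math, here is a minimal sketch of how a compound annual growth rate projects a market size forward; the 2023 base value used below is a hypothetical assumption for illustration, not a figure from the cited statistic.

```python
# Minimal sketch: projecting a market size forward at a compound annual growth rate (CAGR).
# The 2023 base value is a hypothetical assumption for illustration only; the article
# cites the ~27% CAGR and the >$22B-by-2025 figure, not this base number.
def project(base_usd_billions: float, cagr: float, years: int) -> float:
    """Compound the base value forward by `years` at the given annual growth rate."""
    return base_usd_billions * (1 + cagr) ** years

base_2023 = 13.7   # assumed 2023 market size in $B (illustrative)
cagr = 0.27        # "just over 27%" per the cited statistic
print(round(project(base_2023, cagr, 2), 1))  # ~22.1, i.e. past $22B by 2025
```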
Now the question is: How will it change the way we operate? You are here: The current state of AI. Because nobody knows what the technology can and cannot do for us yet, we are still in the “try it on everything and see what sticks” phase. Some businesses have laid off workers or cut budgets because they believe that AI can do their jobs better, faster or cheaper. For example, the German publication Bild recently cut about 200 jobs due to automation and “reorganization.” Instead of taking the route of eliminating jobs, companies should focus on leveraging AI-human synergy to boost entrepreneurship and business growth. This path leads to job creation, more security for — and loyalty from — existing employees and a better overall job market. Working together with AI means laborers have jobs that are easier and more plentiful, freeing businesses to grow, expand and experiment without worrying about their bottom line as much.
We have learned that AI is great for aggregating, parsing and analyzing data. It’s also a good frontline customer service stand-in, providing round-the-clock basic assistance for customers in nearly every industry.
Gen AI for a more “human” first-line customer service experience has already been tested with positive results, and it’s a simple, effective way for other companies to “get their feet wet” with the technology.
The most important thing to understand about the current state of AI in business is that now is the time to take risks and innovate. There is no “right” or “wrong” way to implement AI yet, so businesses of all sizes have a unique opportunity to trailblaze new and effective applications for AI technology.
Peering into the proverbial crystal ball: Predictions for an AI-powered future. AI will be the “multivitamin supplement” of many industries. It won’t replace humans in the same way that supplements don’t replace a healthy diet. Still, it will strengthen companies’ existing operations and fill in the gaps that are currently making work more burdensome for human laborers.
People will do fewer routine tasks and instead fill supervisory roles for automation and robotics. It’s exciting to realize that there will soon be professions that we don’t even have names for yet. As the technology ages and matures and governing bodies create the necessary laws and regulations, our current state of uncertainty will transform into an exciting, bright new future of human-tech cooperation.
We are already seeing this future take shape. For instance, MarTech companies are testing AI-powered fraud detection to supplement the work that human experts do to monitor traffic quality and transparency. This not only eases the human workload but helps companies save resources while getting better results overall.
Similar benefits of human-AI collaboration can be seen in healthcare, with AI that can be trained to assist patients with recovery treatments or perform routine tasks in medical offices or hospitals, freeing nurses and doctors up to focus on patient outcomes. It’s present in warehouses, manufacturing and q-commerce as well, and I think the future will see even more cooperative roles that we don’t even have a frame of reference for yet.
The most likely future for AI is treating it as a “joint effort” technology that requires humans to reach its full potential, so these types of collaborative applications are what corporations should focus on when considering new AI integrations.
How AI affects startups: The investor’s perspective With equal parts excitement and uncertainty, startups are eager to find the most unique and promising applications for AI. One of the most beautiful things about this AI-powered world we’re moving into is that nobody really knows what the “right” direction is, so founders are free to be as innovative as possible.
Startups are often focused on capitalizing on the trends of the moment, and AI is no exception. We’ve seen big companies like Microsoft make massive investments in AI applications, and we’ve also seen ambitious startups bet on AI’s capabilities. Still, the truth is that many other corporations are slow-moving and not yet ready to take significant risks with the technology. This is where startups can come in to fill those riskier innovation gaps.
It’s a race to impress investors right now, so I think we’ll see these companies pop up like mushrooms over the next year or two. Founders are looking for ways to launch projects centered around ideas that already exist in the world but can be applied in AI.
Entrepreneurs seeking capital right now are likely wondering if investors will be evaluating them based on whether they have the capacity for AI expansion. The truth is that it is better for startups to build AI into their business plans. The technology is here to stay, and it is more attractive to investors to see a startup with long-term goals that incorporate AI in some form. However, it’s just as important to have ready answers to questions about ethical implementation and fair use because these topics are already being spotlighted as potential problems with the technology.
Alexander Bachmann is founder and CEO of Mitgo and has more than two decades of experience in the MarTech industry.
"
|
3,088 | 2,023 |
"Exclusive: Microsoft opens AI Co-Innovation Lab in San Francisco to empower Bay Area startups | VentureBeat"
|
"https://venturebeat.com/ai/exclusive-microsoft-opens-ai-co-innovation-lab-in-san-francisco-to-empower-bay-area-startups"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive: Microsoft opens AI Co-Innovation Lab in San Francisco to empower Bay Area startups Share on Facebook Share on X Share on LinkedIn Microsoft has announced the opening of a new AI Co-Innovation Lab in San Francisco at the address 555 California Street. (Image Credit: VentureBeat made with Midjourney) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Microsoft announced today in an exclusive VentureBeat report the opening of its fifth AI Co-Innovation Lab, located in downtown San Francisco at 555 California Street. The lab provides startups and enterprises with access to AI experts, tools and infrastructure to collaborate on developing and testing AI prototypes and solutions.
“Artificial intelligence is one of the defining technologies of our time, and Microsoft is committed to empowering every person and organization, whether it’s a large enterprise or startup, to achieve more with AI,” Microsoft executive vice president of business development, strategy and ventures, Christopher Young, told VentureBeat.
The lab is strategically located in the heart of San Francisco due to the concentration of AI innovation occurring there, especially among startups, he explained. “San Francisco is strategically important because there’s so much innovation in AI happening in San Francisco, specifically in the city, largely related to the startup community,” Young said.
The lab’s main goal is to facilitate the transition from ideation to prototyping, providing companies with the resources and guidance they need to refine their AI-based concepts. Microsoft has previously invested through its M12 Venture Fund in startups, such as Typeface and Hidden Layer, which are focused on AI and cybersecurity, respectively.
Young cited the example of Space and Time, a startup that collaborated with Microsoft through the Co-Innovation Lab. They leveraged SQL Server and combined SQL with access to Web3 data. “What we brought to them through the work that we did together in the lab, was to bring natural language to that, leveraging the power of generative AI, to simplify the complex SQL queries,” he said.
Stoking innovative fires in the AI community Microsoft runs Co-Innovation Labs around the world, including at its headquarters in Redmond, Washington. The labs provide hands-on coaching to help companies take an idea from concept to prototype and testing. According to Young, the San Francisco innovation lab will “help organizations make their AI opportunities real.” This move by Microsoft underlines the company’s commitment to stoking innovative fires within the AI community. The Co-Innovation Lab in San Francisco is an integral part of this strategy, particularly given the city’s reputation as a hotbed for technological innovation.
The lab also serves as a way for Microsoft to engage directly with the local startup community. “We’ve seen a lot of demand for just more opportunities for the San Francisco community, the startup community, in particular, to be able to work very closely with us on many of the different AI projects that different companies, again, of all different sizes are working on,” Young said.
The launch of the lab is also about forging new relationships. “We’re looking forward to having those customers and partners join us in San Francisco,” Young added. The focus on partnerships and collaborations is a clear indicator of Microsoft’s belief in the collective power of innovation.
In a technology landscape that’s rapidly evolving, Microsoft’s Co-Innovation Lab could play a critical role in shaping the future of AI development. By providing a space for both startups and more established companies to experiment and grow, Microsoft is positioning itself at the forefront of AI development while simultaneously fostering a culture of innovation and collaboration.
As the lab begins to churn out its first prototypes and concepts, the tech world will undoubtedly be watching closely. For decision-makers, the new Co-Innovation Lab could serve as an invaluable resource for AI-oriented innovation, providing a platform for testing, refining, and potentially commercializing their AI concepts. Interested startups and entrepreneurs can apply right here starting today.
"
|
3,089 | 2,023 |
"Enterprise-focused AI startup Cohere launches chatbot API | VentureBeat"
|
"https://venturebeat.com/ai/enterprise-focused-ai-startup-cohere-launches-demo-chatbot-coral-and-chat-api"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise-focused AI startup Cohere launches demo chatbot Coral and Chat API Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Toronto, Canada-based Cohere, founded by ex-Googlers, has emerged as one of the leading startups amid the increasingly crowded generative AI marketplace with its focus on developing foundation models and other AI-powered technologies for enterprises.
Today, the company jumped into the fray of the AI chatbot race by releasing a new application programming interface (API), allowing third-party developers and other enterprises to build chat applications based on Cohere’s proprietary large language model (LLM), Command.
“Whether you’re building a knowledge assistant or customer support system, the Chat API makes creating reliable conversational AI products simpler,” wrote Cohere in a blog post announcing the service. It joins existing APIs from Cohere for content generation (Generate) and text summarization (Summary).
In addition, Cohere has provided its own free chatbot demo on the web, the Coral Showcase, to allow users to test out its chatbot on their own. However, you’ll need to sign in with your Google or Cohere credentials to access the environment. Cohere initially introduced the Coral chatbot for customers in July; the new API, however, allows customers to build it into their own internal or external-facing apps.
In VentureBeat’s tests of the system, the Coral chatbot powered by Command was noticeably slower at returning responses than some competing, closed-source chatbots like OpenAI’s ChatGPT or Anthropic’s Claude 2, taking two or more seconds to generate them. However, the responses were largely accurate, up-to-date and clearly written, and did not contain visible hallucinations.
It also cited sources and included links back to them. However, it failed to find some of the most recent information when asked about a specific company.
RAG time: Cohere touted the fact that its new chatbot API features Retrieval-Augmented Generation (RAG), a method of controlling a chatbot’s information sources, allowing developers to constrain them to their own enterprise data, or expand them to scan the entire world wide web, while still taking advantage of the chatbot’s original training and power to both interpret and generate text in natural language.
As Cohere writes in its blog post announcing the new Chat API, “RAG systems improve the relevance and accuracy of generative AI responses by incorporating information from data sources that were not part of pre-trained models.” In the case of Cohere’s new Chat API with RAG, there are only two supported sources of additional information developers can add: a web search implementation or plain text documents from their enterprise (or another source).
“For example, a developer building a market research assistant can equip their chatbot with a web search to access the latest news about trends and competitors in their space,” wrote Cohere in its blog post, later noting, “We train Command specifically to perform well on RAG tasks. This means you can expect high levels of performance from Cohere’s model.” Yet based on VentureBeat’s limited initial tests, the reliability was not always up to what we might expect from a market research assistant, failing to return some current news. However, our tests were extremely limited and only consisted of a few queries so far.
More features available now and coming up: In addition to the RAG-enabled Chat API, Cohere noted that its platform also allows third-party developers to connect three modular components from the startup.
These include a “document mode” allowing the developer to specify which documents they want their Cohere-powered chatbot experience to reference when answering user prompts, a “query-generation mode” that instructs the chatbot to return search queries based on the information the user submits in their prompt, and a “connector mode,” that lets the developer connect their chatbot to the web or another information source.
Cohere also noted that it plans to expand this connector/modular ecosystem.
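To make the document and connector modes described above concrete, here is a minimal sketch of what a RAG-grounded request might look like with Cohere's Python SDK. The parameter names (message, documents, connectors), the "web-search" connector ID and the response fields are assumptions based on the modes described in this article rather than confirmed signatures, so check Cohere's API reference before relying on them.

```python
# Hedged sketch of RAG-grounded chat calls; parameter and field names are assumptions.
import cohere  # pip install cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# "Document mode": ground the answer in enterprise text supplied with the request.
docs = [
    {"title": "Refund policy", "snippet": "Refunds are processed within 5 business days."},
]
doc_reply = co.chat(message="How long do refunds take?", documents=docs)
print(doc_reply.text)

# "Connector mode": let the model pull fresh context from a web search connector.
web_reply = co.chat(
    message="Summarize this week's enterprise AI chatbot news.",
    connectors=[{"id": "web-search"}],
)
print(web_reply.text)
```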
The announcement comes hot on the heels of rival OpenAI’s move yesterday to reintroduce web browsing capabilities to ChatGPT, a feature that had been restricted for a long period after users of the initial March 2023 release used it to bypass website paywalls. It also follows OpenAI’s earlier move to court enterprise users more directly with the announcement of its ChatGPT for Enterprise subscription service tier.
"
|
3,090 | 2,023 |
"Canada wants to be the first country to implement AI regulations | VentureBeat"
|
"https://venturebeat.com/ai/canada-ai-code-of-conduct"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Canada wants to be the first country to implement AI regulations: Minister of Innovation Share on Facebook Share on X Share on LinkedIn Credit: Bryson Masse/VentureBeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Canada aims to be the first country in the world with official regulations covering the emerging artificial intelligence sector, said François-Philippe Champagne, Canada’s Minister of Innovation, Science and Industry in a speech on Wednesday.
“The world is looking at us to lead in how we’re going to define the guardrails that are going to be put in place here in Canada and inspire the rest of the world,” he said.
In his remarks at the ALL IN conference on artificial intelligence regulations in Montreal, Quebec, Champagne noted that “AI is in the minds of everyone, but also in the minds of leaders around the world, and they expect us to act.” An emerging national AI strategy Canada doubled down on its national AI strategy this year.
“Canada will now have a voluntary AI code of conduct which is going to be focused on advanced generative AI,” said Champagne.
The code of conduct, which several major Canadian AI companies including the white-hot enterprise AI startup Cohere, Coveo and Ada, as well as larger enterprises like Blackberry and OpenText, have signed on to, aims to “demonstrate to Canadians that the systems that they’re using are going to be safe and certainly further public interest.” It is intended to build trust while national legislation is developed.
The code of conduct follows lawmakers’ introduction of bill C-27 last year, also known as the Digital Charter Implementation Act, an effort to modernize privacy laws and establish regulations around AI usage as the tech advances and proliferates rapidly.
The bill aims to implement Canada’s new Digital Charter which focuses on protecting privacy and personal information online.
It updates Canada’s privacy laws for the first time in over 20 years to account for developments like facial recognition, emotion detection algorithms, and other new uses of data and artificial intelligence.
Bill C-27 would also establish a new federal Artificial Intelligence and Data Act (AIDA), which builds accountability measures for how companies manage and use Canadians’ personal data, creates rights around their data, and implements guidelines for the ethical development and application of AI technologies.
Proposed AI laws have proven controversial But some activists and even some tech industry leaders have criticized the Canadian government’s efforts so far — both the proposed bill and the voluntary code of conduct, for either doing too little to protect people’s rights, or for going too far in imposing onerous new red tape around innovation.
In a joint letter addressed to the Minister of Innovation, over 30 civil society organizations and experts have raised serious concerns that AIDA fails to adequately protect citizens’ rights and freedoms.
The letter expresses that AIDA as currently proposed puts economic interests above considerations of human rights impacts. Large definitional gaps and uncertainty are criticized for leaving major aspects of the law illegible and without substance.
Most worrying to some activists is the lack of any meaningful public consultation in the development of AIDA. International peers are noted as having done much more substantial cross-sectoral work to thoughtfully develop AI governance rules.
To address these shortcomings, the signatories are calling for the outright removal of AIDA from Bill C-27, under which it is currently proposed. This would allow time for AIDA to be properly scrutinized, reopened for public input, and improved through revisions before being brought forward again. Leaving AIDA as is, risks Canadians’ trust in the regulatory approach to such an important emerging technology.
To address these concerns, the Minister stated that through meetings with experts, “we realized that while we are developing a law here in Canada, it will take time and I think that if you ask people on the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products.” The voluntary code of conduct is a response to these concerns.
Shopify CEO Tobi Lütke took to X, the social platform formerly known as Twitter, to voice his complaints that there isn’t “need for more referees in Canada.” In a meeting with the House of Commons Standing Committee on Industry, Science and Technology on Tuesday, the minister announced that further amendments to the bill will be coming to the legislation to address the issues raised by outside groups.
Canada has long record of AI involvement Canada has been proactively working to develop a framework for responsible AI. The Minister highlighted some of the key steps Canada has already taken, including launching the first national AI strategy in 2017 with almost $500 million in funding. This helped position Canada as a leader in AI from the start. Canada also co-founded the Global Partnership on AI (GPAI) in 2018 together with France to bring together experts to develop best practices on AI.
Internationally, the Minister said Canada is “actively engaged in what we call the Hiroshima AI process… and we’re working to make sure that we have a common approach with like minded countries to managing the arising opportunities coming from generative AI while also tackling the issues that our citizens want us to tackle.” Alignment with international partners is a priority, he said.
In his remarks, the Minister emphasized that “people expect us to come out of this summit with answers to their concerns, but also to demonstrate to the world the opportunities.” Updated, Thursday September 28, 9:33 am ET to correct a quote from François-Philippe Champagne that we originally erroneously reported as “industry” instead of “on the street.” We’ve since updated the quote and regret the error.
"
|
3,091 | 2,023 |
"As Meta brings AI to apps, Google Bard's fail offers cautionary tale | VentureBeat"
|
"https://venturebeat.com/ai/as-meta-brings-ai-to-apps-google-bards-fail-offers-cautionary-tale"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages As Meta brings AI to apps, Google Bard’s fail offers cautionary tale Share on Facebook Share on X Share on LinkedIn Meta AI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It was nearly impossible not to get caught up in the bubbly, colorful, Disneyland-like vibe of Meta’s Connect developer and creator conference yesterday, held for the first time in person since before the pandemic at Meta headquarters in Menlo Park, California.
Hearing the crowd clap during the event’s keynote every time Mark Zuckerberg announced another cool, mind-blowing or just plain adorable AI-driven product (AI stickers! AI characters! AI image of Zuck’s dog!) reminded me of being a kid watching the orca shows at Sea World in wonder: Ooh! The orca just clapped! Ahhh…look how high it can jump! That’s because Meta AI’s offerings were incredibly impressive — at least in their demo forms. Chatting with Snoop Dogg as a dungeon master on Facebook, Instagram or WhatsApp? Yes, please. Ray-Ban Smart Glasses with built-in voice AI chat? I’m totally in. AI-curated restaurant recommendations in my group chat? Where has this been all my life? But the interactive, playful, fun nature of Meta’s AI announcements — even those using tools for business and brand use — comes at a moment when the growing number of Big Tech’s fast-paced AI product releases, including last week’s Amazon Alexa news and Microsoft’s Copilot announcements — are raising concerns about security, privacy, and just plain-old tech hubris.
Google Search exposed Bard conversations As VentureBeat’s Carl Franzen reported on Monday, after Google’s big update of Bard last week that earned mixed reviews, this week another, older Bard feature came under scrutiny — that Google Search had begun to index shared Bard conversational links into its search results pages, potentially exposing information users meant to be kept contained or confidential.
This means that if a person used Bard to ask it a question and shared the link with a spouse, friend or business partner, the conversation accessible at that link could in turn be scraped by Google’s crawler and show up publicly, to the entire world, in the search results.
There’s no doubt that this was a big Bard fail on what was meant to be a privacy feature — it led to a wave of concerned conversations on social media, and forced Google, which declined to comment to Fast Company on the record, to point to a tweet from Danny Sullivan, the company’s public liaison for search. “Bard allows people to share chats, if they choose,” Sullivan wrote. “We also don’t intend for these shared chats to be indexed by Google Search. We’re working on blocking them from being indexed now.” But will that be enough to convince users to continue to put their trust in Bard? Only time will tell.
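For context on the remedy Google described, here is a minimal, hypothetical sketch (not Google's actual implementation) of the two standard ways a site can ask crawlers not to index a shared-conversation page: a robots meta tag in the HTML, and an X-Robots-Tag response header.

```python
# Hypothetical sketch: serving a shared-conversation page that asks crawlers not to index it.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<conversation_id>")
def shared_conversation(conversation_id: str):
    # Option 1: a robots meta tag inside the page itself.
    html = (
        "<html><head><meta name='robots' content='noindex, nofollow'></head>"
        f"<body>Shared conversation {conversation_id}</body></html>"
    )
    resp = make_response(html)
    # Option 2: an X-Robots-Tag header, which also covers non-HTML responses.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```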
ChatGPT as ‘wildly effective’ therapist — dangerous AI hype? Another concerning AI product moment of the week: As OpenAI CEO Sam Altman touted ChatGPT’s new voice mode and vision on X, Lilian Weng, head of safety systems at OpenAI, tweeted about her “therapy” session with ChatGPT: Weng received a wave of pushback for her comments about ChatGPT therapeutic use cases, but OpenAI cofounder and chief scientist Ilya Sutskever doubled down on the idea, saying that in the future we will have ‘wildly effective’ and ‘dirt cheap AI therapy’ that will ‘lead to a radical improvement in people’s experience of life.’ Given that Weng and Sutskever are not mental health experts or qualified therapists, this seems like a dangerous, irresponsible tack to take when these tools are about to be so widely adopted around the world — and actual lives can be impacted. Certainly it’s possible that people will use these tools for emotional support or a therapy of sorts — but does that mean therapy is a proper, responsible use case for LLMs and that the company marketing the tool (let alone the chief scientist developing it) should be promoting it as such? Seems like a lot of red flags there.
Meta takes AI fully mainstream Back to Meta: The company’s AI announcements seemed like the ultimate thus far in terms of bringing generative AI fully to the mainstream. Yes, Amazon’s latest Alexa LLM will be tied to the home, Microsoft’s Copilot is heading to nearly every office, while OpenAI’s ChatGPT started it all.
But AI chat in Facebook? AI-generated images in Instagram? Sharing AI chats, stickers and photos in WhatsApp? This will take the number of generative AI users into the billions. And with the dizzying speed of AI product deployment from Big Tech, I can’t help but wonder if none of us can properly comprehend what that really means.
I’m not saying that Meta is not taking its AI efforts seriously. Far from it, according to a blog post the company posted about its efforts to build generative AI features responsibly. The document emphasizes that Meta is building safeguards into its AI features and models before launching them; will continue to improve the features as they evolve; and is “working with governments, other companies, AI experts in academia and civil society, parents, privacy experts and advocates, and others to establish responsible guardrails.” But of course, like everything else with AI these days, the consequences of these product rollouts remain to be seen — since they have never been done before. It seems like we are all in the midst of one massive RLHF — reinforcement learning with human feedback — experiment, as the world begins to use these generative AI products and features, at scale, out in the wild.
And just like with AI, scale matters: As billions of people try out the latest AI tools from Meta, Amazon, Google and Microsoft, there are bound to be more Bard-like fails coming soon. Here’s hoping the consequences are minor — like a simple chat with Snoop Dogg, the Dungeon Master, gone awry.
"
|
3,092 | 2,023 |
"Quantum threats loom in Gartner's 2023 Hype Cycle for data security | VentureBeat"
|
"https://venturebeat.com/security/whats-new-in-gartners-hype-cycle-for-data-security-in-2023"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantum threats loom in Gartner’s 2023 Hype Cycle for data security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The best-run organizations prioritize cybersecurity spending as a business decision first, and Gartner’s Hype Cycle for Data Security 2023 reflects the increasing dominance of this approach. Key technologies needed for assessing and quantifying cloud risk are maturing, and new technologies to protect against emerging threats are predicted to gain traction.
Business cases are driving data security integration and technology Gartner sees the core technologies needed to validate and quantify cyber-risk maturing quickly as more organizations focus on measuring their cybersecurity investments’ impact. CISOs tell VentureBeat that it is a new era of financial accountability, and that extends to new technologies for securing data stored in multicloud tech stacks and across networks globally. Getting control of cybersecurity costs is becoming a much higher priority as boards of directors look at how data security spending protects, and potentially grows, revenue.
Gartner’s latest Hype Cycle for data security dovetails with what CISOs, CIOs and their teams tell VentureBeat, especially in compliance-centric industries such as insurance, financial services, institutional banking and securities investments. Gartner added five new technologies this year: crypto-agility, post-quantum cryptography, quantum key distribution, sovereign data strategies and digital communications governance. Eight technologies have been removed or reassigned this year.
Getting integration right in data security at the enterprise level has always been a challenge. The need for more secure approaches to data integration has led to a proliferation of solutions over the years, some more secure than others. Gartner predicts these challenges will shift or consolidate data security technologies, including data security posture management (DSPM), data security platforms (DSPs) and multicloud database activity monitoring (DAM).
CISOs also say they are monitoring quantum computing as an evolving potential threat and have delegated monitoring it to their strategic IT planning teams. Gartner also introduced crypto-agility in this year’s Hype Cycle, responding to its clients’ requests for as much data and knowledge as possible in this area.
2023 key trends in data security CISOs and the teams they manage tell VentureBeat that protecting data in the cloud, and the many identities associated with each data source across multicloud configurations, is getting more challenging given the need to provide access rights by data type while still tracking compliance.
That’s made even more difficult by the exponential growth of machine identities across enterprises’ cloud instances. This year’s Hype Cycle for data security underscores this and other trends summarized here.
Data governance and risk management are now strategic priorities Board members regularly question CISOs about governance and risk management. CISOs tell VentureBeat that while board members know risk management at an expert level, they need to have the technology-based context of data governance and risk management defined from a tech stack and multicloud perspective.
These dynamics between boards and CISOs are playing out across hundreds of companies as data governance and risk management dominate Gartner’s discussions in this year’s Hype Cycle. Boards want to know how to accurately quantify cyber-risk, which drives greater compliance. CISOs say that financial data risk assessment (FinDRA) is board-driven and weren’t surprised it appears on the Hype Cycle.
Moving data to the cloud increases the need for data-in-use protection technologies Nearly every business relies on cloud services for a portion, if not all, of their infrastructure and application suites. Gartner sees this as a potential risk for data and has identified a series of technologies and techniques on the Hype Cycle to protect data in use and at rest.
These include confidential computing, homomorphic encryption, differential privacy and secure multiparty computation (SMPC). Confidential computing relies on hardware-based trusted execution environments to isolate data processing, while SMPC allows collaborative data analysis without exposing raw data. The presence of these data-in-use technologies on the Hype Cycle demonstrates the shift from securing only data at rest and in transit to also protecting data in use.
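As one concrete illustration of these data-in-use techniques, here is a minimal sketch of the Laplace mechanism behind differential privacy: noise scaled to sensitivity divided by epsilon is added to an aggregate query so that individual records cannot be inferred from the published result. The query, count and epsilon below are hypothetical.

```python
# Minimal differential-privacy sketch using the Laplace mechanism; values are hypothetical.
import numpy as np

def laplace_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Return an epsilon-differentially-private count.

    Adding or removing one record changes a count by at most 1 (the sensitivity),
    so noise drawn from Laplace(0, sensitivity / epsilon) satisfies epsilon-DP.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Publish how many records match a query without revealing any single individual.
print(laplace_count(true_count=1042))
```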
New quantum computing-based threats on the horizon Much has been written and predicted about when quantum computing will break encryption. In reality, no one knows when it will happen; however, there’s wide consensus that quantum technologies are progressing in that direction. CISOs VentureBeat interviewed on the topic see cryptography at varying levels of urgency depending on their business models, industries and how reliant they are on legacy encryption.
Gartner added both crypto-agility and post-quantum cryptography to the Hype Cycle for the first time this year. CISOs are pragmatic about technologies with as long a runway as these have. In previous interviews, CISOs told VentureBeat they could see where post-quantum cryptography could strengthen zero-trust frameworks in the long term.
New technologies added to the Hype Cycle: Together, Gartner’s five new Hype Cycle technologies prepare CISOs for the next generation of quantum threats while addressing the most challenging aspects of governance and data sovereignty. The five newly added technologies are briefly summarized here. Crypto-agility: The purpose of crypto-agility is to upgrade encryption algorithms used in applications and systems in real time, alleviating the risk of a quantum-based breach. Gartner writes that this will enable organizations to replace vulnerable algorithms with new post-quantum cryptography to ward off attacks using quantum computing to defeat encryption. Crypto-agility offers CISOs a path to secure encryption as quantum capabilities advance over the next five to seven years.
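As a hedged sketch of what crypto-agility can look like in application code (an illustrative pattern, not Gartner's definition or any vendor's product), the cipher below is selected through a registry and a configuration value rather than hard-coded, so a classical scheme can later be swapped for a post-quantum one without touching call sites. The Fernet cipher stands in for today's algorithm, and the post-quantum entry is a placeholder.

```python
# Crypto-agility sketch: choose the cipher via configuration so it can be swapped later.
# Requires `pip install cryptography`; the PQC entry below is purely a placeholder.
from typing import Callable, Dict, Tuple
from cryptography.fernet import Fernet

def fernet_factory() -> Tuple[Callable[[bytes], bytes], Callable[[bytes], bytes]]:
    key = Fernet.generate_key()
    f = Fernet(key)
    return f.encrypt, f.decrypt

# Registry of available schemes; adopting a post-quantum cipher later means registering
# one more factory and changing the configuration value below, with no call-site changes.
CIPHER_REGISTRY: Dict[str, Callable[[], Tuple[Callable, Callable]]] = {
    "fernet-aes128": fernet_factory,
    # "pqc-placeholder": pqc_factory,  # hypothetical future post-quantum entry
}

ACTIVE_CIPHER = "fernet-aes128"  # would normally come from configuration

encrypt, decrypt = CIPHER_REGISTRY[ACTIVE_CIPHER]()
token = encrypt(b"customer record")
assert decrypt(token) == b"customer record"
```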
Post-quantum cryptography Gartner defines this new technology as based on new quantum-safe algorithms, such as lattice cryptography, that are resistant to decryption by quantum computers. The use case Gartner discusses in the Hype Cycle centers on using this technology in a pre-emptive strategy against quantum-based threats.
VentureBeat’s interviews with CISOs at financial trading firms revealed that pro-forma tech stacks already defend against quantum computing risks and threats. Gartner’s latest addition will likely be added to roadmaps for further evaluation by those CISOs responsible for commercial banking and other financial services and institutions. Leading vendors include Amazon, IBM and Microsoft.
Quantum key distribution (QKD) This technology works by using quantum physics principles, including photon entanglement, to create and exchange tamper-evident keys. Gartner considers QKD a niche technology today. But given its nature, uses in applications critical to national security are a natural extension of its strengths, as it’s anticipated to be useful for exchanging high-value data. Leading vendors include ID Quantique, MagiQ Technologies and Toshiba.
Sovereign data strategies This is a new addition to the Hype Cycle that supports data security governance, privacy impact assessment, financial data risk assessment (FinDRA) and data risk assessment. Sovereign data strategies reflect efforts by governments to provide strong governance and data security for their citizens and economy.
Privacy, security, access, use, retention, sharing regulations, processing and persistence are examples cited by Gartner. According to the firm, sovereign data strategies will eventually become table stakes for any business that needs to complete transactions across sovereign jurisdictions.
Digital communications governance Digital communications governance (DCG) solutions monitor, analyze and enforce employee messaging, voice and video compliance policies. DCG platforms also manage regulatory and corporate governance requirements with data retention, surveillance, behavioral analytics and e-discovery. They help compliance teams identify misconduct and comply with regulations by monitoring communications data.
DCG also helps CIOs and CISOs manage employee messaging, voice and video platform risks by consolidating access and enforcement across communication channels. Leading vendors include Global Relay, Proofpoint and Veritas.
Trends most strongly driving the future of data security Ten key trends emerge from this year’s Hype Cycle. Data governance, risk management and compliance are core drivers of the data security market. Gartner believes that preparing for quantum computing threats, convergence and integration of security tools, and managing unknown shadow IT data are high priorities.
The following matrix compares the most influential factors, in order of priority, that are influencing the future of data security.
"
|
3,093 | 2,023 |
"5 ways CISOs can prepare for generative AI's security challenges | VentureBeat"
|
"https://venturebeat.com/security/5-ways-cisos-can-prepare-for-generative-ai-security-challenges-and-opportunities"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 5 ways CISOs can prepare for generative AI’s security challenges and opportunities Share on Facebook Share on X Share on LinkedIn Illustration by: Leandro Stavorengo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
With generative AI tools like ChatGPT proliferating across enterprises, CISOs have to strike a very difficult balance: Performance gains versus unknown risks. Gen AI is delivering greater precision to cybersecurity but also being weaponized into new attack tools such as FraudGPT that advertise their ease of use for the next generation of attackers.
Solving the question of performance versus risk is proving a growth catalyst for cybersecurity spending. The market value of gen AI-based cybersecurity platforms, systems and solutions is expected to rise to $11.2 billion in 2032 from $1.6 billion in 2022.
Canalys expects generative AI to support more than 70% of businesses’ cybersecurity operations within five years.
Weaponized AI strikes at the core of identity security: Gen AI attack strategies are focused on getting control of identities first. According to Gartner, human error in managing access privileges and identities caused 75% of security failures, up from 50% two years ago. Using gen AI to force human errors is one of the goals of attackers.
VentureBeat interviewed Michael Sentonas, president of CrowdStrike, to gain insights into how the cybersecurity leader is helping its customers take on the challenges of new, more lethal attacks that defy existing detection and response technologies.
Sentonas said that “the hacking [demo] session that [we] did at RSA [2023] was to show some of the challenges with identity and the complexity. The reason why we connected the endpoint with identity and the data that the user is accessing is because it’s a critical problem. And if you can solve that, you can solve a big part of the cyber problem that an organization has.” Cybersecurity leaders are up for the challenge Leading cybersecurity vendors are up for the challenge of fast-tracking gen AI apps through DevOps to beta and doubling down on their many models in development.
During Palo Alto Networks’ most recent earnings call, chairman and CEO Nikesh Arora emphasized the intensity the company is putting into gen AI, saying, “we’re doubling down, we’re quadrupling down to make sure that precision AI is deployed across every product. And we open up the floodgates of collecting good data with our customers for them to give them better security because we think that is the way we’re going to solve this problem to get real-time security.” Toward resilience against AI-based threats: For CISOs and their teams to win the war against AI attacks and threats, gen AI-based apps, tools and platforms must become part of their arsenals. Attackers are out-innovating the most adaptive enterprises, sharpening their tradecraft to penetrate the weakest attack vectors. What’s needed is greater cyber-resilience and self-healing endpoints.
Absolute Software’s 2023 Resilience Index reveals how challenging it is to excel at the comply-to-connect trend. Balancing security and cyber-resilience is the goal, and the Index provides a useful roadmap. Cyber-resilience, like zero trust, is an ongoing framework that adapts to an organization’s changing needs.
Every CEO and CISO VentureBeat interviewed at RSAC 2023 said employee- and company-owned endpoint devices are the fastest-moving, hardest-to-protect threat surfaces. With the rising risk of gen AI-based attacks, resilient, self-healing endpoints that can regenerate operating systems and configurations are the future of endpoint security.
Five ways CISOs and their teams can prepare Central to being prepared for gen AI-based attacks is to create muscle memory of every breach or intrusion attempt at scale, using AI and machine learning (ML) algorithms that learn from every intrusion attempt. Here are the five ways CISOs and their teams are preparing for gen AI-based attacks.
Securing generative AI and ChatGPT sessions in the browser Despite the security risk of confidential data being leaked into LLMs, organizations are intrigued by boosting productivity with gen AI and ChatGPT. VentureBeat’s interviews with CISOs reveal that these professionals are split on defining AI governance.
For any solution to this problem to work, it must secure access at the browser, app and API levels to be effective.
Several startups and larger cybersecurity vendors are working on solutions in this area. Nightfall AI’s recent announcement of an innovative security protocol is noteworthy. The company’s customizable data rules and remediation insights help users self-correct. The platform gives CISOs visibility and control so they can use AI while ensuring data security.
Always scanning for new attack vectors and types of compromise SOC teams are seeing more sophisticated social engineering, phishing, malware and business email compromise (BEC) attacks that they attribute to gen AI. While attacks on LLMs and AI apps are nascent today, CISOs are already doubling down on zero trust to reduce these risks.
That includes continuously monitoring and analyzing gen AI traffic patterns to detect anomalies that could indicate emerging attacks and regularly testing and red-teaming systems in development to uncover potential vulnerabilities. While zero trust can’t eliminate all risks, it can help make organizations more resilient against gen AI threats.
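As a hedged illustration of the monitoring step described above (not any vendor's product), here is a minimal sketch that flags unusual gen AI prompt traffic with an Isolation Forest; the features, synthetic data and thresholds are all hypothetical.

```python
# Minimal anomaly-detection sketch for gen AI prompt traffic; features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: [prompt_length, special_char_ratio, requests_per_minute]
normal = np.column_stack([
    rng.normal(400, 120, 500),    # typical prompt lengths
    rng.normal(0.05, 0.02, 500),  # mostly plain text
    rng.normal(3, 1, 500),        # a few requests per minute per user
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of long, symbol-heavy prompts at a high rate, e.g. automated prompt-injection probing.
suspicious = np.array([[4000, 0.4, 60]])
print(model.predict(suspicious))  # -1 flags an anomaly worth reviewing
```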
Finding and closing gaps and errors in microsegmentation: Gen AI’s potential to improve microsegmentation, a cornerstone of zero trust, is already happening thanks to startups’ ingenuity. Nearly every microsegmentation provider is fast-tracking DevOps efforts.
Leading vendors with deep AI and ML expertise include Akamai, Airgap Networks, AlgoSec, Cisco, ColorTokens, Elisity, Fortinet, Illumio, Microsoft Azure, Onclave Networks, Palo Alto Networks, VMware, Zero Networks and Zscaler.
One of the most innovative startups in microsegmentation is Airgap Networks, named one of the 20 best zero-trust startups of 2023.
Airgap’s approach to agentless microsegmentation reduces the attack surface of every network endpoint, and it is possible to segment every endpoint across an enterprise while integrating the solution into an existing network with no device changes, downtime or hardware upgrades.
Airgap Networks also introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships.
“With highly accurate asset discovery, agentless microsegmentation and secure access, Airgap offers a wealth of intelligence to combat evolving threats,” Airgap CEO Ritesh Agrawal told VentureBeat. “What customers need now is an easy way to harness that power without any programming. And that’s the beauty of ThreatGPT — the sheer data-mining intelligence of AI coupled with an easy, natural language interface. It’s a game-changer for security teams.” Guarding against generative AI-based supply chain attacks Security is often tested right before deployment, at the end of the software development lifecycle (SDLC). In an era of emerging gen AI threats, security must be pervasive throughout the SDLC, with continuous testing and verification. API security must also be a priority, and API testing and security monitoring should be automated in all DevOps pipelines.
While not foolproof against new gen AI threats, these practices significantly raise the barrier and enable quick threat detection. Integrating security across the SDLC and improving API defenses will help enterprises thwart AI-powered threats.
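Automating API security checks can be as simple as running them alongside unit tests in every pipeline. The snippet below is a minimal, hedged example using pytest-style tests and the requests library; the base URL, routes and expected status codes are placeholders rather than any specific vendor’s API.

    # Minimal API security checks that can run in any CI pipeline (pytest + requests).
    # The base URL, routes and expected status codes are hypothetical placeholders.
    import requests

    BASE = "https://api.internal.example.com"        # hypothetical service under test

    def test_rejects_unauthenticated_requests():
        r = requests.get(f"{BASE}/v1/completions", timeout=10)
        assert r.status_code in (401, 403)

    def test_rejects_oversized_prompts():
        r = requests.post(
            f"{BASE}/v1/completions",
            json={"prompt": "A" * 1_000_000},
            headers={"Authorization": "Bearer test-token"},
            timeout=10,
        )
        assert r.status_code in (400, 413)           # oversized payloads should be refused

    def test_no_stack_traces_leak_to_clients():
        r = requests.get(f"{BASE}/v1/does-not-exist", timeout=10)
        assert "Traceback" not in r.text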
Taking a zero-trust approach to every generative AI app, platform, tool and endpoint A zero-trust approach to every interaction with AI tools, apps and platforms and the endpoints they rely on is a must-have in any CISO’s playbook. Continuous monitoring and dynamic access controls must be in place to provide the granular visibility needed to enforce least privilege access and always-on verification of users, devices and the data they’re using, both at rest and in transit.
CISOs are most worried about how gen AI will bring new attack vectors they’re unprepared to protect against. For enterprise LLMs, protecting against query attacks, prompt injection, model manipulation and data poisoning is a high priority.
Preparing for generative AI attacks with zero trust CISOs, CIOs and their teams are facing a challenging problem today. Do gen AI tools like ChatGPT get free rein in their organizations to deliver greater productivity, or are they reined in and controlled, and if so, by how much? Samsung’s failure to protect IP is still fresh in the minds of many board members.
One thing everyone agrees on, from the board level to SOC teams, is that gen AI-based attacks are increasing. Yet no board wants to jump into capital expense budgeting, especially given inflation and rising interest rates. The answer many are arriving at is accelerating zero-trust initiatives. While an effective zero-trust framework won’t stop gen AI attacks completely, it can help reduce their blast radius and establish a first line of defense in protecting identities and privileged access credentials.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,094 | 2,023 |
"NPCx raises $3M for better game character mocap | VentureBeat"
|
"https://venturebeat.com/games/npcx-raises-3m-for-better-game-character-mocap"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture NPCx raises $3M for better game character mocap Share on Facebook Share on X Share on LinkedIn NPCx is working on smarter NPCs.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
NPCx has raised $3 million in funding for improving motion capture for non-player characters (NPCs) in games.
Kakao Investment led the round to strengthen NPCx’s position within the gaming and entertainment industry and propel it to create more realistic character movements using AI-powered products.
St. Petersburg, Florida-based NPCx launched its flagship product in March with the debut of TrackerX. This motion capture processing tool disrupts the conventional and labor-intensive process of tracking raw 3D point cloud data, the company said. By seamlessly integrating with any optical or sensor-based motion capture system, TrackerX simplifies the workflow by directly applying the captured data onto the TrackerX character skeleton.
Cameron Madani, CEO of NPCx, said in a statement, “TrackerX disrupts this costly manual process that’s been around for nearly thirty years by cleaning raw motion capture with AI and proprietary biomechanical models, saving companies thousands of labor hours and significant financial resources per project.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The new round will fuel the development of NPCx’s pioneering product, BehaviorX. This technology aims to enhance gaming experiences by capturing and utilizing real-time data from players to create lifelike behavioral clones in NPCs. By analyzing player behavior and translating it into realistic NPC actions, BehaviorX promises to level up the immersion and engagement in gaming.
In addition to BehaviorX, NPCx has plans to launch two other innovative products: RetargetX and AIMX. Both products harness the power of neural networks to retarget motion capture animation and predict the next animation frame, resulting in smoother and more lifelike character movements.
Prior to securing the Kakao Investment, NPCx raised over $540,000 through the crowdfunding platform Republic. This achievement not only demonstrates the potential of NPCx but also highlights the public’s keen interest in the integration of AI technologies within the gaming industry.
“We are honored to have Kakao Investment lead our seed round,” Madani said. “As a prominent startup investor in Asia, securing Kakao Investment’s investment during these challenging times in the capital markets is a tremendous vote of confidence in our team, technology, and product roadmap. This partnership marks a significant milestone for NPCx and will greatly expedite the development of our AI-powered products that will revolutionize the industry.” Origins NPCx was founded in 2020 by Madani (CEO), Michael Puscar (CTO), and Alberto Menache (CPO). Before starting the firm, Madani had experience founding a game development studio and later a motion capture and animation company that worked with major studios and publishers in the gaming, film, and XR industries.
Recognizing AI’s potential in revolutionizing animation pipelines for years, Madani sought AI specialists and crossed paths with Puscar in 2019. Leveraging Puscar’s AI and entrepreneurial expertise, they wanted to use AI and machine learning to streamline animation processes by automating tasks through neural networks, drawn from Puscar’s successful applications of the technology in other domains.
Soon after, the cofounders brought in Alberto Menache, a well-known pioneer in animation and motion capture pipelines with decades of experience. In fact, Menache authored a book on motion capture and has been a leading figure in the development of animation and motion capture pipelines for nearly 30 years.
“What I particularly value about this founding team is our extensive industry experience and expertise,” Madani said in an email to GamesBeat. “This allows us to create numerous AI-driven products that seamlessly integrate into existing animation pipelines, resulting in immediate and substantial time and cost savings for the same customers we’ve been working with throughout our professional careers.” In 2008, Madani began working as a business development director with a third party developer and publisher involved in 16 title releases (Sony, Microsoft and Nintendo). In 2010, he co-developed the top-selling game, Torchlight for Microsoft/Runic Games (Xbox 360 and PC/Mac). In 2014, Cameron co-founded Motion Burner, an award-winning motion capture and animation studio which has provided motion capture, rigging, modeling and animation services for 24 clients and 71 projects.
Puscar has been programming since the mid-1980s, when at 11 years old he found a Commodore 64 under the Christmas tree. His work as a teenager was noticed by the US government, and he was recruited out of university to work for DARPA via Lockheed Martin with a top secret security clearance.
Puscar‘s expertise as a technologist is in the area of artificial intelligence and machine learning, including natural language processing, computer vision and the development of neural networks.
Menache is known as one of the fathers of Motion Capture. He has spent the last seven years solving “impossible” technical challenges for Lightstorm Entertainment and James Cameron on the Avatar films (2 through 5). Some of his credits include top film franchises such as Superman, Spider Man and Mission Impossible. He is the author of two definitive books on Motion Capture and a holder of nine patents in animation and motion capture innovations.
Revamping mocap NPCx primarily focuses on two innovative aspects in video gaming: character movement and character intelligence. For character movement, NPCx is creating a suite of products that vastly reduces the time and cost of creating and deploying motion capture and key-framed animations in video games, film, XR, and the metaverse. Their first product, TrackerX, launched in March 2023 utilizing neural networks and biomechanical models to substantially reduce the processing time of motion capture data, which up until now was done manually.
For character intelligence, NPCx is actually modeling humans – real-world players – and not creating god-like AIs or robotic LLM/GPT-driven procedural animation engines. They aim to virtually ‘clone’ players into games, XR, and the metaverse to such an extent that distinguishing between an NPC and a human player becomes nearly impossible.
Traditionally, motion capture performances relying on optical or sensor-based hardware systems require painstaking manual “cleaning” to prepare them for the final product. This process, done with a mouse and keyboard, corrects issues such as feet going through the floor or limbs penetrating other characters and objects.
TrackerX transforms this by combining biomechanical modeling and neural networks to automate this cleaning process. Currently, TrackerX speeds up manual cleaning by nearly 50 times, with ongoing neural network training for sustained improvements. This efficiency not only saves time and money for studios but also enables more extensive motion capture content creation within the same budget.
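One of the corrections described above, keeping feet from sinking through the floor, is easy to picture in code. The sketch below is a deliberately simplified illustration using NumPy, not TrackerX’s biomechanical model; the joint indices, up-axis convention and floor height are assumptions.

    # Simplified illustration of one mocap "cleaning" rule: clamp foot joints so they never
    # penetrate the floor plane. Not TrackerX's model; joint indices, the Y-up convention
    # and the floor height are assumed for the example.
    import numpy as np

    FLOOR_Y = 0.0            # assumed ground-plane height
    FOOT_JOINTS = [3, 7]     # assumed indices of the left/right foot joints

    def clamp_feet(frames):
        """frames: (num_frames, num_joints, 3) joint positions with Y as the up axis."""
        cleaned = frames.copy()
        feet_y = cleaned[:, FOOT_JOINTS, 1]
        cleaned[:, FOOT_JOINTS, 1] = np.maximum(feet_y, FLOOR_Y)
        return cleaned

    if __name__ == "__main__":
        frames = np.zeros((2, 8, 3))
        frames[1, 3, 1] = -0.05                      # left foot dips 5 cm below the floor
        print(clamp_feet(frames)[1, 3, 1])           # -> 0.0

Production cleanup involves far more than clamping, such as resolving limb-through-object penetrations and filling occluded markers, which is where the learned biomechanical models come in.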
One big rival is Inworld AI, which recently raised $50 million at a $500 million valuation. Madani said, “Since we are developing lifelike NPCs for video games, XR, and the metaverse, Inworld AI would be a close competitor of ours. However, like many other similar competitors, they primarily use advanced Large Language Models (LLMs) and Generative Pre-Trained Transformers (GPTs) as their engine to bring characters to life, along with a procedural animation shell. We believe using LLMs and GPTs with generative animation systems is a “red ocean” strategy, meaning anybody can deploy a GPT engine at fairly low costs and train it within a generic animation wrapper, we believe this approach will create a very crowded field in a short amount of time, basically making it a commodity.” Other close competitors are attempting to create super NPCs, utilizing Generative Adversarial Networks (GANs) and Generative Adversarial Imitation Learning (GAIL), similar to OpenAI’s approach with DOTA 2 in 2019, although it’s important to note that OpenAI’s methods at the time could technically be considered cheating, according to an article by Vice.
“What distinguishes us from the rest of the pack, and where we hold a competitive advantage, is that while they either create a GPT system with an animation shell, or aim to generate super NPCs that are excessively lethal, our technology fine-tunes the NPCs to achieve a highly lifelike quality,” Madani said. “In fact, we can replicate various character play styles. Our secret lies in how we replicate these characters. We believe our methodology will result in more lifelike characters that exhibit human-like behaviors, avoiding the extremes of behaving solely like an animated GPT or, on the other end of the spectrum, being godlike.” The company has 22 employees and plans to hire an additional five by the end of 2023.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
"
|
3,095 | 2,023 |
"Meet Superframe, the AI startup that wants to be your copilot for revenue operations | VentureBeat"
|
"https://venturebeat.com/enterprise-analytics/meet-superframe-the-ai-startup-that-wants-to-be-your-copilot-for-revenue-operations"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meet Superframe, the AI startup that wants to be your copilot for revenue operations Share on Facebook Share on X Share on LinkedIn Image Credit: Superframe Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Superframe , an AI-powered software company aiming to help businesses optimize their go-to-market technology stacks, announced today that it has raised $5 million in seed funding from more than 40 angel investors, including data and AI experts, Salesforce consultants and general operating experts.
The round comes on the heels of Superframe’s launch of its first official product, an AI assistant for managing complex Salesforce implementations. The startup says its technology will save companies time and money by making Salesforce configuration changes fast, safe, reliable and easy.
Derek Steer, cofounder and CEO of Superframe, said that accuracy is going to be the company’s number one differentiator in the AI market.
“We want to fight the consumer frustration with a lack of accuracy. We want to build trust with our customers by giving them something they can’t get somewhere else,” he told VentureBeat in a recent interview.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Steer is no stranger to the data and AI world, as he previously sold his last company Mode , a business intelligence platform, to Thoughtspot for $200 million.
Simplify and optimize your go-to-market tools In the long term, Superframe aims to solve the pain points that many companies face when they implement go-to-market tools, such as Salesforce, Marketo and HubSpot. These tools are often complex, rigid and hard to configure, resulting in wasted time, money and resources. Superframe uses the latest language models from OpenAI ( ChatGPT ) to provide instant and accurate answers to questions about the current state of the system, and to propose and implement configuration changes based on the users’ description of what they want to do.
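The underlying pattern, handing a system’s current configuration to a language model and asking questions about it, can be sketched briefly. This is not Superframe’s product code; the metadata shape, model name and prompt below are illustrative assumptions.

    # Minimal sketch of the pattern: feed a system's current configuration to an LLM and ask
    # questions about it. Not Superframe's code; the metadata shape, model name and prompt
    # are illustrative assumptions.
    import json
    from openai import OpenAI                        # pip install openai

    client = OpenAI()                                # expects OPENAI_API_KEY in the environment

    def ask_about_config(config, question):
        prompt = (
            "You answer questions about a Salesforce-style configuration. "
            "Use only the JSON provided; say 'unknown' if the answer is not in it.\n\n"
            f"Configuration:\n{json.dumps(config, indent=2)}\n\nQuestion: {question}"
        )
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    # Hypothetical example:
    # config = {"validation_rules": [{"object": "Opportunity", "field": "Amount", "rule": "Amount > 0"}]}
    # print(ask_about_config(config, "Which objects have validation rules on Amount?"))

Grounding the model strictly in the supplied metadata, rather than letting it free-associate, is what the accuracy claims above depend on.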
Steer also said that Superframe will not replace humans, but rather enable them to rely on their expertise and clear their backlogs.
“We want to help more people build more expertise,” he said. “And that’s something that customers are still going to want to rely on.” He added that Superframe will help customers map out their business processes and configure their systems without being held back by the complexity and rigidity of the tools.
Superframe is currently in beta testing with a select group of customers, and plans to launch publicly in early 2024. The first phase of Superframe, which is answering questions about the system, will be free for users. The company plans to use the seed funding for product development and hiring more engineers. The startup currently has four employees.
Building the next generation of go-to-market tools Superframe is one of the many startups that are using AI to simplify and optimize business operations. According to a recent Gartner report, the market for AI software will reach almost $134.8 billion by 2025.
The report also cites the increasing adoption of cloud-based services and applications as one of the key drivers for the AI market growth.
Superframe’s vision is to become the copilot for revenue operations teams, and to help them think more creatively about their go-to-market strategies.
“We believe that humans are capable of a lot,” said Steer. “And we are in a lot of cases bottlenecked by the tools that we use. We want to remove those bottlenecks in order to give people a greater ability to employ their creativity.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,096 | 2,023 |
"Confirm raises $6.2 million to bring generative AI and network analysis to performance reviews | VentureBeat"
|
"https://venturebeat.com/enterprise-analytics/confirm-raises-6-2-million-to-bring-generative-ai-and-network-analysis-to-performance-reviews"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Confirm raises $6.2 million to bring generative AI and network analysis to performance reviews Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Does anybody like hearing the phrase, “It’s performance review season, again”? In most organizations where this author has ever worked (and he has worked at many), neither managers nor employees particularly relished the process of giving and receiving performance reviews.
Still, many companies insist on them as a way of evaluating their talent and ensuring that high performers are rewarded with promotions or new opportunities, while low performers are identified and put on a path to improvement — or toward exiting the company. Yet, when administered by human beings — be they managers or peers — performance reviews can feel like personal attacks.
Confirm thinks it has a better way forward. The San Francisco-based startup announced it has raised $6.2 million in series A funding (and a total of $11.4 million) to transform the performance review process from the ground up, incorporating “organizational network analysis (ONA),” an approach the consulting giant Deloitte describes as “visualizing and analyzing formal and informal relationships in your organization,” as well as generative AI in the form of OpenAI’s GPT-4 , to deliver more fair, scientific and efficient performance reviews.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The round was spearheaded by Spero Ventures, and saw participation from SHRMLabs, Elefund, Gaingels and Black Angel Group as well as some of Confirm’s existing clients.
Fairness over favoritism According to Confirm, traditional performance review methods like continuous feedback and 360-degree assessments often muddy the waters instead of clearing them. Confirm is looking to change this by making performance reviews more straightforward and data-driven.
Confirm’s approach measures employee performance by examining how all employees in the company view one another. ONA operates on the principle that performance isn’t an isolated metric, but a network of relationships and influences within the workplace.
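Organizational network analysis can be approximated with standard graph tooling. The toy example below ranks employees by how often colleagues say they turn to them, using PageRank over a directed graph; it illustrates the idea only and is not Confirm’s methodology, and the survey edges are made up.

    # Toy organizational network analysis (ONA): rank employees by network influence using
    # PageRank over "who goes to whom for help" edges. Illustrates the idea only; it is not
    # Confirm's methodology, and the survey responses are made up.
    import networkx as nx                            # pip install networkx

    # Each edge (a, b) means "a says they go to b for help or energizing interactions".
    responses = [
        ("ana", "bo"), ("carl", "bo"), ("dee", "bo"),
        ("bo", "ana"), ("carl", "ana"), ("dee", "eli"), ("eli", "ana"),
    ]

    graph = nx.DiGraph()
    graph.add_edges_from(responses)

    influence = nx.pagerank(graph, alpha=0.85)
    for person, score in sorted(influence.items(), key=lambda kv: -kv[1]):
        print(f"{person:>5}: {score:.3f}")
    # High scorers are the informal influencers an org chart may not reveal.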
In fact, Confirm’s prior research published in Fast Company found that managers rated male employees 25% higher than female employees on average, a gap that did not show up in the network-based ratings for the two groups.
It also offers GPT-4-created drafts of performance reviews customized to each specific employee with input from their peers and managers; auto-generated employee survey results; and auto-calibrated ratings for employees that seek to minimize bias from any one particular manager or another.
A strong early track record Confirm was founded not too long ago in 2019 , but companies like Canada Goose, Niantic and Thoropass have already been reaping the benefits of its performance review platform.
Thoropass, for instance, managed to identify and keep all of its top performers during the wave of employee turnover known as “The Great Resignation,” in the late stages of the COVID-19 pandemic.
According to Joe Bast, VP of people and operations at Thoropass, ONA has been a game-changer, helping the company understand not just high and low performers, but also who the real influencers within the company are.
The company also earned a “World Changing Ideas Award” from Fast Company, and an HR Tech Award for Best Talent Intelligence Solution from Lighthouse Research and Advisory. It was chosen by SHRMLabs for its 2023 WorkplaceTech Accelerator program, a platform that helps promising startups grow.
What does the future hold for performance reviews? While every organization — from large to small, from established longstanding leaders to nimble new startups — has its own culture and politics, those shouldn’t really influence performance reviews, according to Confirm’s vision of the future.
David Murray, cofounder and president, wants to create “a world where employees are recognized and rewarded for their hard work and positive impact, not their ability to play office politics.” And, in a time where remote and hybrid teams are commonplace, there may not even be a real opportunity to evaluate someone face-to-face. Data-driven performance reviews matter more than ever, and Confirm aims to be the first name you think of when it comes time to do them — hopefully with a lot less dread than before.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,097 | 2,023 |
"Rockset to boost real-time database for AI era with $44M raise | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/rockset-to-boost-real-time-database-for-ai-era-with-44m-raise"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rockset to boost real-time database for AI era with $44M raise Share on Facebook Share on X Share on LinkedIn Venkat Venkataramani, Rockset cofounder and CEO (L) and Dhruba Borthakur, cofounder and CTO of Rockset (R). Image credit: Rockset Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Database vendor Rockset is raising $44 million in new funding, as demand for its real-time indexing capabilities grows in the modern generative AI era.
The new fundraise follows the company’s series B round and brings total funding to date for the San Mateo, California-based company to $105 million.
Icon Ventures led the new round, with participation from Glynn Capital , Four Rivers , K5 Global , Sequoia and Greylock.
Over the course of 2023 in particular, Rockset has been growing its technology, which uses the open-source RocksDB persistent key-value store originally created at Meta (formerly Facebook) as a foundation. In March, Rockset rolled out a platform update designed to make its real-time indexing database dramatically faster. That update was followed in April by vector embedding support to help enable AI use cases.
“We’re getting pulled in more and more into AI applications that are getting built, and that is a very, very big platform shift that’s happening,” Venkat Venkataramani, cofounder and CEO of Rockset, told VentureBeat. “Fundamentally what we do is real-time indexing, and it turns out applications also need real-time indexing on vector embeddings.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Vector support is about more than just a new data type The use of vector embeddings, stored in some form of vector database , has grown in 2023 with the rise of generative AI.
Vectors, numerical representations of data, are used to help power large language models (LLMs). There are a number of purpose-built vector databases , including Pinecone and Milvus , which join a growing number of existing database technologies including DataStax , MongoDB and Neo4j that support vector embeddings.
Inside Rockset, vector embeddings are supported as a data type known as an “array of floats” in the existing database. Venkataramani emphasized, however, that simply supporting vectors as a data type isn’t what’s particularly interesting to him.
Rather, what is more interesting from his perspective is how Rockset has now built a real-time index technology for the vector embeddings. The index provides a logical key for enabling search on a given set of data. Having the index updated in real time is critical for certain production use cases requiring the most updated information possible.
As it turns out, the same basic approach that Rockset has built for real-time indexing of metadata also works well for vectors. Having a real-time index that can query both regular data and vectors is useful for modern AI applications, according to Venkataramani.
“Every AI application we were dealing with doesn’t only work with vectors. There are always all these other database metadata fields associated with every one of them — and the application needs to query on all of them,” he said.
How Rockset has built a real-time index for vector embeddings At the foundation of Rockset’s real-time database is the RocksDB data store, which the company has extended with the RocksDB Cloud technology.
Venkataramani explained that Rockset has developed a number of advanced techniques with RocksDB Cloud that help accelerate indexing for all data types. He noted that RocksDB Cloud now has an approximate nearest neighbor (ANN) indexing implementation, which is critical to enabling real-time search on vector data.
“Now, like any other index in Rockset, once you build a similarity ANN index for a vector embeddings column, it’s always up-to-date,” Venkataramani said. “It just automatically keeps itself up-to-date across inserts, updates and deletes.” Rockset also integrates a distributed SQL engine for fast data queries. Venkataramani noted that the company’s SQL engine is now able to execute real-time queries across all supported data types on the database.
“You can now literally, in a single SQL query, do a whole bunch of filters and joins and aggregations, and also use a vector embedding to do ranking relevance in a similarity search use case,” he said. “A single SQL query is extremely efficient and very, very fast, because the SQL engine is built to power applications and not analysts that are waiting for reports.” Looking forward, Venkataramani expects that there will be a lot more development of AI capabilities in Rockset. Among the future capabilities he’s looking forward to is support for GPU acceleration to further speed queries for LLMs and generative AI use cases.
“This industry is just getting started. This platform shift is not a fad; this is going to be a core part of every application,” he said.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,098 | 2,023 |
"Google reveals BigQuery innovations to transform working with data | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/google-reveals-bigquery-innovations-to-transform-working-with-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google reveals BigQuery innovations to transform working with data Share on Facebook Share on X Share on LinkedIn DAVOS, SWITZERLAND - JANUARY 25, 2022: A pedestrian passes a Google Cloud logo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Google is pushing the bar on how teams work with their data.
Today at its annual Cloud Next conference, the internet giant announced major improvements for BigQuery — its fully managed, serverless data warehouse , including a unified experience aimed at interconnecting data and workloads. The company also shared how it plans to bring AI to the data stored in the platform, and how it plans to leverage its generative AI collaborator to boost the productivity of teams looking to consume insights from data.
“These innovations will help organizations harness the potential of data and AI to realize business value — from personalizing customer experiences, improving supply chain efficiency, and helping reduce operating costs, to helping drive incremental revenue,” Gerrit Kazmaier, VP and GM for data and analytics at Google, wrote in a blog post.
However, it must be noted that most of these capabilities are still being previewed and not generally available to customers.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unified experience with BigQuery Studio Within BigQuery, which allows users to perform scalable analysis over petabytes of data, Google is adding a unified interface called BigQuery Studio. This offering will provide users with a single environment for data engineering, analytics and predictive analysis.
Until now, data teams had to work with different tools for different tasks, from managing data warehouses and data lakes to governance and machine learning (ML). Handling these tools took a lot of time and slowed down productivity. With BigQuery Studio, Google is enabling these teams to work with all of these tools in one place, to quickly discover, prepare and analyze their datasets and run ML workloads on them.
“BigQuery Studio provides data teams with a single interface for your data analytics in Google Cloud, including editing of SQL, Python, Spark and other languages, to easily run analytics at petabyte scale without any additional infrastructure management overhead,” a company spokesperson told VentureBeat. “This means a data worker doesn’t have to switch from one tool to another; it’s all in one place, making their lives easier and getting to results faster.” The offering is now available in preview and is already being tested by multiple enterprises including Shopify. Kazmaier also said Google is adding enhanced support for open-source formats like Hudi and Delta Lake within BigLake ; performance acceleration for Apache Iceberg; and cross-cloud materialized views and cross-cloud joins in BigQuery Omni to analyze and train on data without moving it.
(Editor Note: To help enterprise executives learn more about how to manage their data to prepare for generative AI applications, VentureBeat is hosting its Data Summit 2023 on November 15. The event will feature networking opportunities and sessions on topics such as data lakes, data fabrics , data governance and data ethics. Pre-registration for a 50% discount is open now.
) Even more for data teams Along with BigQuery Studio, Google is providing access to Vertex AI foundation models, including PaLM 2 , directly from BigQuery. This will allow data teams using BigQueryML (to create and run ML models on their datasets) to scale SQL statements against large language models (LLMs) and gain more insights, quickly and easily. The company also said it is adding new model inference capabilities and vector embeddings in BigQuery to help teams run LLMs at scale on unstructured datasets.
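Concretely, the pattern Google has documented for this, calling a remote Vertex AI model from SQL with ML.GENERATE_TEXT, looks roughly like the sketch below. The dataset, table and remote model names are hypothetical, the remote model is assumed to already exist, and exact option and output column names can vary by release.

    # Hedged sketch of calling an LLM from BigQuery ML, roughly following the ML.GENERATE_TEXT
    # pattern Google documented in 2023. The dataset, table and remote model names are
    # hypothetical (the remote model is assumed to already exist), and exact option and output
    # column names can vary by release.
    from google.cloud import bigquery                # pip install google-cloud-bigquery

    client = bigquery.Client()

    sql = """
    SELECT *
    FROM ML.GENERATE_TEXT(
      MODEL `my_dataset.ticket_llm`,                 -- remote model wired to a Vertex AI LLM
      (SELECT CONCAT('Summarize this support ticket: ', ticket_text) AS prompt
       FROM `my_dataset.tickets` LIMIT 10),
      STRUCT(0.2 AS temperature, 256 AS max_output_tokens)
    )
    """

    for row in client.query(sql).result():
        print(dict(row))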
“Using new model inference in BigQuery, customers can run model inferences across formats like TensorFlow, ONNX and XGBoost,” Kazmaier noted. “In addition, new capabilities for real-time inference can identify patterns and automatically generate alerts.” Finally, the company said it is integrating its always-on generative AI-powered collaborator, Duet AI, into BigQuery, Looker and Dataplex. This will bring natural language interaction and automatic recommendations to these tools, boosting the productivity of teams and opening access to more users.
This integration is also in preview with no word on general availability yet.
Google Cloud Next runs through August 31.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,099 | 2,023 |
"Databricks bets big on activating data for marketers with Hightouch investment | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/databricks-bets-big-on-activating-data-for-marketers-with-hightouch-investment"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks bets big on activating data for marketers with Hightouch investment Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
We’re living in a time where just about every company is overflowing with data, but when it comes to getting meaningful insights from it — that’s where organizations are often coming up short.
Enter Databricks , a San Francisco-based heavyweight in the data and AI space. They’re the team behind the lakehouse concept and they’re on a mission: To monetize data by making insights more accessible.
Today, Databricks has announced it’s putting its money where its mouth is. The company’s venture capital arm, Databricks Ventures , revealed in an exclusive VentureBeat report that it has made a strategic investment in promising San Francisco-based startup Hightouch , a software platform that helps businesses synchronize and activate all of their customer data.
Harnessing the power of vast data resources The strategic investment is a part of a recent $38 million funding announcement aimed squarely at a core challenge that has troubled businesses: How to effectively harness the power of their vast data resources. The combined offering of Databricks’ robust data platform and Hightouch’s efficient data extraction capabilities is set to provide businesses with the tools needed to fully exploit their data, particularly in the field of marketing.
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Steve Sobel, who leads communications, media and entertainment at Databricks, explained the essence of the partnership in an interview with VentureBeat. “What we’re delivering with Hightouch is all around making data usable,” he said. “It’s about helping organizations through their enterprise data challenges and strategy.” Sobel’s comments underscore Databricks’ game plan to position itself as a vertical player in the sector, focusing on speaking the language of the customer and the industry. “We live in an era where every industry is moving toward direct-to-consumer,” he said. “Optimizing marketing and delivering a superior, personalized experience across any channel, anywhere, anytime is essential.” Syncing customer data across systems Hightouch cofounder and co-CEO Kashish Gupta offered a complementary perspective, explaining the “ match booster ” concept, a feature built into Hightouch that harmonizes first-party data with third-party datasets. “This approach allows businesses to reach their customers across a multitude of different channels,” said Gupta.
He further explained the convergence of data and marketing strategies, saying: “A data strategy and marketing strategy have actually become one in the current business landscape. Personalization based on factors such as zip code, last login time and myriad other activities now decisively influences these strategies.” Reflecting on the surge in digital data, Gupta pointed out: “Companies have more data than ever due to digital transformation. Extracting value out of that data by optimizing marketing using the data is truly where this partner strategy delivers.” (Editor note: To help enterprise executives learn more about how to manage their data to prepare for generative AI applications, VentureBeat is hosting its Data Summit 2023 on November 15. The event will feature networking opportunities and sessions on topics such as data lakes, data fabrics , data governance and data ethics. Pre-registration for a 50% discount is open now.
) Rapid growth by empowering marketing teams Founded in 2020 by Gupta, a former Bessemer Venture Partners investor, and former Segment engineers Tejas Manohar and Josh Curl, Hightouch helps customers leverage their data warehouse as a single source of truth for their business teams.
By using Hightouch’s reverse ETL (extract, transform and load) technology, customers can access, explore and sync data from their data warehouse to more than 200 SaaS tools such as Salesforce, HubSpot, Facebook and TikTok, without relying on engineering resources.
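Stripped to its essentials, reverse ETL is a query-then-push loop: read modeled rows out of the warehouse and upsert them into a SaaS tool’s API. The sketch below illustrates that loop in the simplest possible form; it is not Hightouch’s implementation, and the warehouse query, CRM endpoint and auth token are hypothetical placeholders.

    # Conceptual reverse ETL loop: read modeled rows from the warehouse, push them into a SaaS
    # tool. Not Hightouch's implementation; the warehouse query, CRM endpoint and token are
    # hypothetical placeholders.
    import requests
    from google.cloud import bigquery                # any warehouse client would do

    CRM_URL = "https://crm.example.com/api/contacts/upsert"   # hypothetical endpoint
    CRM_TOKEN = "replace-me"

    def sync_high_value_customers():
        warehouse = bigquery.Client()
        rows = warehouse.query(
            "SELECT email, lifetime_value, last_seen "
            "FROM `analytics.customer_360` WHERE lifetime_value > 1000"
        ).result()
        for row in rows:
            payload = {
                "email": row.email,
                "properties": {"ltv": row.lifetime_value, "last_seen": str(row.last_seen)},
            }
            resp = requests.post(
                CRM_URL,
                json=payload,
                headers={"Authorization": f"Bearer {CRM_TOKEN}"},
                timeout=10,
            )
            resp.raise_for_status()

    if __name__ == "__main__":
        sync_high_value_customers()

Commercial platforms layer change detection, batching, retries and per-destination schema mapping on top of this loop, which is the part that is hard to build and maintain in-house.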
Hightouch claims to have hundreds of customers already across various verticals and industries, including the NBA, Grammarly, PetSmart, Imperfect Foods and Betterment. For context on its rapid growth, the company says it increased its revenue three times in the first half of 2022 alone and has grown its team from 40 employees in 2021 to 93 this year.
Fueling product development, go-to-market, new talent The new funding will be used to invest in product development, especially in the areas of customer understanding and out-of-the-box machine learning (ML) models, according to Gupta. Hightouch also plans to expand its go-to-market activities and hire more talent across different functions.
Gupta said that the company’s rapid growth has been driven by customer demand and product market fit. He said that Hightouch’s vision is to democratize data for all business teams by enabling them to use data from their data warehouse without code or engineers.
Hightouch is one of the pioneers of the reverse ETL category, which is rapidly growing as more businesses adopt data warehouses as their source of truth. According to Gartner, the number of enterprises implementing AI grew by 270% in the past four years and tripled in the past year, driving an increase in streaming data and analytics infrastructures with it. This creates a huge opportunity for platforms like Hightouch that can help businesses activate their data and apply AI to it.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,100 | 2,023 |
"Travelshift Secures $10 Million USD Capital Raise | VentureBeat"
|
"https://venturebeat.com/business/travelshift-secures-10-million-usd-capital-raise"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Travelshift Secures $10 Million USD Capital Raise Share on Facebook Share on X Share on LinkedIn REYKJAVIK, Iceland–(BUSINESS WIRE)–August 29, 2023– Travelshift, the leading online travel agency (OTA) in Iceland has raised $10 million USD of capital from existing shareholders, raising the total funding amount to $30 million USD.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230829071571/en/ David Byron Stewart, the CEO of Travelshift and former Head of Global Private Equity at State Street Global Advisors (Photo: Business Wire) The financing represents a vote of confidence from Travelshift’s current investors in the company’s growth trajectory and future potential. According to Harshal Chaudhari, President and Chief Investment Officer at GE Investment Management Co., “We are proud to continue supporting Travelshift’s remarkable journey. The company’s unwavering commitment to innovation and customer satisfaction has been instrumental in its success, and we firmly believe in its potential to further disrupt the leisure travel industry.” The latest funding round follows Travelshift’s recent launch of Guide to Europe , a travel platform that solves the Connected Trip for the European leisure travel market. This innovative service, which is powered by AI, allows travelers to book everything they need in one checkout and manage their entire journey in one app. It also gives travelers access to thousands of itineraries that have been optimized with AI, with what is now the world’s largest selection of vacation packages in Europe.
“We are thrilled to receive the continued support and trust from our existing shareholders,” said David Stewart, CEO of Travelshift. “This financing round enables us to accelerate our growth initiatives and continue to build the next generation of travel through innovative use and application of AI to deliver personalized and seamless travel experiences.” Travelshift would like to express its sincere gratitude to its dedicated team, loyal customers, and steadfast shareholders for their continued support and belief in the company’s vision.
About Travelshift: Travelshift is a leading Icelandic online travel agency (OTA) that specializes in providing technology-driven travel solutions. With over a decade of experience in the industry, Travelshift is committed to revolutionizing how people travel, offering innovative services and exceptional customer experiences.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230829071571/en/ For media inquiries or further information, please contact: David Stewart Chief Executive Officer Email: [email protected] Phone: +354 791 9394 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,101 | 2,023 |
"Reveal Acquires Logikcull and IPRO | VentureBeat"
|
"https://venturebeat.com/business/reveal-acquires-logikcull-and-ipro"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Reveal Acquires Logikcull and IPRO Share on Facebook Share on X Share on LinkedIn Double Acquisition Introduces First Unified AI-Powered eDiscovery Platform Capable of Tackling Any Legal Case at Scale CHICAGO–(BUSINESS WIRE)–August 29, 2023– Reveal, a global provider of a category-leading AI-powered eDiscovery platform, announced today it has acquired both Logikcull and IPRO, two other leading eDiscovery players. Together, the three companies offer the first end-to-end eDiscovery platform that addresses matters of all sizes and for all legal teams, from solo legal practitioners to the largest enterprise. The transactions, valued at more than $1 billion, were funded by Reveal’s majority shareholder and leading software investment firm, K1 Investment Management.
The combination integrates Logikcull and IPRO’s unique capabilities with Reveal’s proven AI prowess to create an all-in-one hub of eDiscovery tools for matters of any size and scope. From self-service offerings for smaller cases to enterprise-grade solutions for complex legal challenges, Reveal now stands as the go-to partner for automating the practice of law.
“The acquisitions of Logikcull and IPRO build on Reveal’s growth strategy of integrating the best and most useful technologies into one platform so customers have greater choice and control over their eDiscovery workflows,” said Wendell Jisa, Founder & CEO of Reveal. “By bringing together the strengths of all three companies, including Logikcull’s intuitive, easy-to-use functionality and IPRO’s global reach and information governance tools, Reveal is now able to serve the diverse needs of clients across the legal spectrum, from SMB to mid-market and enterprise.” In addition, the acquisitions will bring industry leading AI-powered eDiscovery solutions to an untapped global legal market. The company will now have employees stationed in more than two dozen countries, serving a customer base of over 4,000 clients.
“These two acquisitions are a continuation of our commitment to bring together the best technologies and people to propel the practice of law into a new era,” said Tarun Jain, Principal at K1 Investment Management. “With this combination, legal professionals will only have to look to one company to solve all their eDiscovery needs.” By integrating Logikcull and IPRO into Reveal’s ecosystem, Reveal now offers the most advanced automation capabilities in the industry. Logikcull’s seamless, self-service functionality enables users to efficiently handle simpler cases in-house, while Reveal’s scalable, feature-rich platform helps tackle the most complex litigation matters. The combined suite covers every stage of the eDiscovery process, from data collection and processing to review and analysis, so legal experts can increase efficiency, reduce costs, and focus on higher-value tasks that better serve their clients.
Reveal’s expanded suite of solutions is available immediately, offering day-one benefits including: Empowering legal professionals with choice : The acquisition of Logikcull allows Reveal to offer both down-market and enterprise customers multiple eDiscovery options to appropriately address the scale and complexity of any legal case, ensuring optimal efficiency and cost-effectiveness.
Bridging the justice gap: Reveal is the only legal technology company to democratize eDiscovery for all legal matters, offering any business – whether small or large – access to its leading AI-powered solutions.
Global expansion & access: The acquisition of IPRO enables Reveal to introduce its AI technology to a new client base across the globe.
Comprehensive AI-powered platform: Reveal has created a complete ecosystem for the legal industry with solutions ranging from information governance, early case assessment, legal hold, and collection to processing, document review, and trial presentation – all ultimately underpinned by one of the most powerful AI engines in legal technology.
Customized eDiscovery experience: Reveal’s expanded team of eDiscovery experts ensures that clients receive tailored solutions and guidance to navigate complex litigation challenges. Together with Logikcull and IPRO, Reveal continues to foster a culture of innovation and collaboration with its customers, pushing the boundaries of legal automation.
Am Law 100 firms, Fortune 500 corporations, legal service providers, government agencies and financial institutions in more than 20 countries across five continents work collaboratively with Reveal to uncover insights faster and solve even the most complex legal challenges with the most advanced AI in the industry. For more information about Reveal, visit www.revealdata.com.
About Reveal Reveal provides leading document review technology, underpinned by leading processing, visual analytics, and artificial intelligence, all seamlessly integrated into a single platform for eDiscovery and investigations. Our software combines technology and human guidance to transform structured and unstructured data into actionable insight. We help organizations, including law firms, corporations, government agencies, and intelligence services, uncover more useful information faster by providing a seamless user experience and patented AI technology that is embedded within every phase of the eDiscovery process.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230829922415/en/ Media Contact: Liz Whelan 312.315.0160 [email protected] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
3,102 | 2,023 |
"Professor J Mocco to Join Protembis' Board of Directors | VentureBeat"
|
"https://venturebeat.com/business/professor-j-mocco-to-join-protembis-board-of-directors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Professor J Mocco to Join Protembis’ Board of Directors Share on Facebook Share on X Share on LinkedIn Protembis announces the appointment of Professor J. Mocco to Board of Directors AACHEN, Germany–(BUSINESS WIRE)–August 29, 2023– Protembis GmbH (Protembis) a privately-held emerging cardiovascular medical device company, announced today the appointment of Professor J Mocco MD, MS, FAANS, FAHA of Icahn School of Medicine at Mount Sinai, NY, USA as an independent member of their Board of Directors.
Professor Mocco brings a wealth of clinical and academic experience as the Kalmon D. Post Professor and Senior Vice Chair of the Department of Neurological Surgery at Mount Sinai and is the immediate past President of the Society of Neurointerventional Surgery. Over his distinguished medical career spanning more than 20 years, he has authorship credits on over 600 publications. He has been an editorial board member of Stroke since 2015 and has served or is serving as an associate editor of other journals, including Neurosurgery, the Journal of Neurointerventional Surgery, and ISNR Stroke.
In his new role on the Board of Protembis, Professor Mocco will help shape the strategic direction of the company, drawing on his deep knowledge and clinical insight into endovascular stroke diagnosis and management. He will also offer guidance on clinical strategies and new product development.
“I have been impressed by the Protembis team’s achievements in developing an elegant system to mitigate cerebral infarction risk during Transcatheter Aortic Valve Replacement,” says Professor Mocco. He continues: “Their adaptive IDE clinical trial strategy is both rigorous and innovative. I am excited to offer my insights and guidance to the Board as this field evolves to treat future aortic stenosis patients who will have zero tolerance for brain injury as a potential procedural complication.”
Protembis has recently received FDA approval to conduct an IDE study aimed at demonstrating the safety and efficacy of the ProtEmbo® Cerebral Protection System (“ProtEmbo”) during transcatheter aortic valve replacement (“TAVR”). The ProtEmbo® System is an intra-aortic filter device that protects the entire brain from embolic material liberated during the TAVR procedure. It is a low-profile system that shields all cerebral vessels, delivered through the left radial artery for optimal placement and stability. This is an ideal access site, enabling physicians to avoid interference with TAVR equipment, which is typically delivered through the femoral artery. The IDE study is designed as a multicenter randomized controlled trial in the USA and Europe.
“To have such an eminent expert with deep experience in the field of stroke join our Board is a strong indication of the impact the Protembis cerebral embolic protection solution can have on the future of TAVR,” say Karl von Mangoldt and Conrad Rasmus, Co-CEOs of Protembis. “I am delighted to welcome Professor Mocco to the Board of Protembis and to have his insights and strategic guidance as we generate confirmatory clinical data and further advance the field of cerebral embolic protection with the ProtEmbo System for complete cerebral protection,” adds Dr Azin Parhizgar, Chairwoman of the Protembis Board of Directors.
About Protembis Protembis is a privately-held emerging medical device company that has developed the ProtEmbo® Cerebral Protection System. The company strives to provide a simple and reliable solution to protect patients from brain injury during left-sided heart procedures, improving patient quality of life and reducing overall healthcare costs associated with brain injury during such procedures. The ProtEmbo® System is currently undergoing clinical investigations.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230829785853/en/ Protembis GmbH Conrad Rasmus & Karl von Mangoldt Co-CEOs & Co-Founders +49(0)241 9903 3622 management[at]protembis.com www.protembis.com
"
|
3,103 | 2,023 |
"Fianu Labs Secures $2 Million in Seed Funding from DataTribe to Automate Governance of Software Development | VentureBeat"
|
"https://venturebeat.com/business/fianu-labs-secures-2-million-in-seed-funding-from-datatribe-to-automate-governance-of-software-development"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Fianu Labs Secures $2 Million in Seed Funding from DataTribe to Automate Governance of Software Development Share on Facebook Share on X Share on LinkedIn Companies will soon be Liable for the Safety of Their Software for Consumers, Companies, and Governments.
FULTON, Md.–(BUSINESS WIRE)–August 30, 2023– Fianu Labs, the software governance automation solution, today secured a $2 million seed investment from DataTribe, a global cyber foundry that invests in and co-builds next-generation cybersecurity and data science companies.
For businesses in regulated industries, the weight of software regulation is onerous. Each software release requires hundreds of hours of manual evidence gathering, leading to longer release cycles that stifle innovation and cost tens of millions of dollars in lost productivity every year. There looks to be no relief in sight as regulators have signaled a renewed focus on software development practices in response to recent attacks on the software supply chain. Businesses are in dire need of a solution that streamlines their compliance and shortens release cycles.
Fianu Labs is pioneering the path for businesses to succeed in the era of software regulation with an intuitive approach to governance that instills confidence in each release. Fianu captures and maintains a continuous audit trail that tells the story of each code change, from commit to release, and automates a once-chaotic manual process with speed and clarity. At its core, Fianu bridges the gap between Security, Quality Assurance, Engineering, and Risk with a shared language and a unified front to regulators and auditors. The result is reduced risk, faster release cycles, and easier audits.
“Fianu is truly revolutionizing secure software development observability,” said Leo Scott, Chief Innovation Officer for DataTribe and a Fianu Board of Directors member. “Fianu gives Chief Technology Officers, Chief Security Officers, and Chief Information Officers confidence to deliver software at the speed they want and with the integrity required.” Over the last three years, the federal government has signaled increased scrutiny of software release practices, foreshadowing an era of crippling red tape and higher costs that could create significant challenges for companies across the regulatory landscape. Additionally, recent rulings have expanded the liability of software vendors and their executives. The message is clear: Companies that develop software will be held accountable for the security of their products. Fianu aims to reduce the weight of regulation for established companies while helping smaller and traditionally less-regulated companies transition to the new era of software development.
The company is providing visibility into the software development process in a provable way, enabling organizations to immutably attest to fundamental, sound, and secure software development best practices. Today, the demand is in regulated industries, but in the future, all companies producing custom software solutions will need to meet software governance requirements.
Fianu’s platform captures evidence across the DevSecOps toolchain mapped to internal policy during real-time, continuous audits against established risk controls and compliance frameworks. Each software release is accompanied by a Software Bill of Attestations (SBOA) designed to transmit immutable, audit-worthy evidence. By using Fianu, companies can replace opaque manual processes with streamlined, intuitive automation that makes software governance and compliance easy.
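The release has not published a public schema for the SBOA, but conceptually it is a machine-readable bundle of signed, control-mapped attestations tied to a specific release. A minimal sketch of what such a bundle might look like is below; the field names, control IDs and tools are illustrative assumptions, not Fianu's actual format:

import json
from datetime import datetime, timezone

# Hypothetical Software Bill of Attestations (SBOA) for one release.
# Field names, control IDs and tools are illustrative, not Fianu's real schema.
sboa = {
    "release": {"application": "payments-api", "version": "2.14.0",
                "commit": "9f3c2ab", "built_at": datetime.now(timezone.utc).isoformat()},
    "attestations": [
        {"control": "SAST-001", "tool": "sonarqube", "result": "pass",
         "evidence_uri": "https://ci.example.com/runs/4812/sast"},
        {"control": "SCA-002", "tool": "dependency-check", "result": "pass",
         "evidence_uri": "https://ci.example.com/runs/4812/sca"},
        {"control": "PEER-REVIEW-003", "tool": "github", "result": "pass",
         "evidence_uri": "https://github.com/example/payments-api/pull/731"},
    ],
}

print(json.dumps(sboa, indent=2))  # transmitted alongside the release artifact

The key idea is that each control maps to a piece of evidence collected automatically from the toolchain, so an auditor can trace a release back to the checks it passed.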
“There is no better team than DataTribe to help us realize our vision of a governance ecosystem that powers a modern approach to continuous delivery under rigorous regulatory requirements,” said Michael Edenzon, CEO and co-founder of Fianu Labs.
About DataTribe DataTribe is a startup foundry that invests in and co-builds world-class startups focused on generational leaps in cybersecurity and data science. Founded by leading investors, startup veterans, and alumni of the U.S. intelligence community, DataTribe commits capital, in-kind services, access to an unparalleled network, and decades of professional expertise to give their companies an unfair advantage. DataTribe is headquartered in the Washington-Baltimore metro area in Fulton, Maryland. For more information, visit datatribe.com.
About Fianu Labs Fianu Labs is a pioneer in the field of governance engineering and RegTech. Our focus is building software products to empower companies to deliver compliant software with maximum velocity. Fianu Labs is headquartered in Washington, D.C., and was founded by leaders in software governance, co-authors of Investments Unlimited, and software delivery experts from one of the nation’s largest banks. For more information, visit fianu.io.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230830073713/en/ Josh Zecher [email protected]
"
|
3,104 | 2,023 |
"Daversa Partners Ranks Among Top 20 Best Medium Workplaces 2023, According to Fortune Media and Great Place To Work® | VentureBeat"
|
"https://venturebeat.com/business/daversa-partners-ranks-among-top-20-best-medium-workplaces-2023-according-to-fortune-media-and-great-place-to-work"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Daversa Partners Ranks Among Top 20 Best Medium Workplaces 2023, According to Fortune Media and Great Place To Work® Share on Facebook Share on X Share on LinkedIn NEW YORK–(BUSINESS WIRE)–August 31, 2023– Great Place To Work® and Fortune magazine have selected Daversa Partners as one of 2023’s 100 Best Medium Workplaces.
Coming in at No. 19, Daversa Partners has earned a spot as one of the best companies to work for in the country.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230831931182/en/ Daversa Partners ranked No. 19 on this year’s 100 Best Medium Workplaces list. (Photo: Business Wire) To determine the Best Medium Workplaces list, Great Place To Work analyzed the survey responses of over 210,000 employees from Great Place To Work Certified™ companies with 100 to 999 U.S. employees.
The Best Medium Workplaces list is highly competitive. Great Place To Work, the global authority on workplace culture, determines its lists using its proprietary For All™ methodology to evaluate and certify thousands of organizations in America’s largest ongoing annual workforce study, based on over 1.3 million survey responses and data from companies representing more than 7.5 million employees this year alone.
Survey responses reflect a comprehensive picture of the workplace experience. Honorees were selected based on their ability to offer positive outcomes for employees regardless of job role, race, gender, sexual orientation, work status, or other demographic identifier.
“This year, we celebrate 30 years of Daversa Partners,” said Paul Daversa, Founder and CEO of Daversa Partners. “Over these three decades, we have had the privilege of being a part of the dynamic evolution of the tech industry. This journey has not just been about our growth as a firm, but about the remarkable founders, funders, and operators who have undeniably shaped the ecosystem with their strategic vision. We are proud and grateful to play a role in this evolution.” In 2022, Daversa Partners earned Great Place to Work™ Certification , with 95% of employees saying that “people care about each other here.” Daversa Partners was also awarded Best Workplaces for Women™ by Fortune and Great Place to Work® in 2022 – a testament to the firm’s commitment to the 64% of women who make up the company, with 56% at the leadership level. So far this year, Daversa Partners has secured a No.4 spot on Fortune’s 2023 Best Workplaces in New York list, was named a 2023 Best Workplace for Millennials , and recertified as a Great Place to Work™.
About Daversa Partners For three decades, Daversa Partners has built the leading management teams across the most disruptive companies of this generation, focused on serving the global founder and funder community around the world. Having worked alongside tech’s top VC and PE firms, Daversa Partners has had the privilege to build over 10,000 consumer and enterprise companies, all of which hold a shared vision: push the throttle on innovation. The company today is an important strategic partner that moves top executives into startup and growth oriented companies.
About the Fortune Best Medium Workplaces List Great Place To Work selected the Fortune Best Medium Workplaces List by surveying companies employing 7.5 million people in the U.S. with 1.3 million confidential responses received. Of those, more than 210,000 responses were received from employees at companies eligible for the Best Medium Workplaces list and this ranking is based on that feedback. Company scores are derived from 60 employee experience questions within the Great Place To Work Trust Index™ Survey.
Read the full methodology.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230831931182/en/ Nicole Daversa [email protected]
"
|
3,105 | 2,023 |
"BuildESG Adds Free Version of BuildRI Software, Accelerating the Path Toward a Single Source of Trust for Responsible Investment Integration in the Alternative Investment Sector | VentureBeat"
|
"https://venturebeat.com/business/buildesg-adds-free-version-of-buildri-software-accelerating-the-path-toward-a-single-source-of-trust-for-responsible-investment-integration-in-the-alternative-investment-sector"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release BuildESG Adds Free Version of BuildRI Software, Accelerating the Path Toward a Single Source of Trust for Responsible Investment Integration in the Alternative Investment Sector Share on Facebook Share on X Share on LinkedIn The BuildRI Platform for LPs, Lenders and Investment Managers Emerges as the Network for Assessing, Supporting and Sharing Responsible Investment and ESG Information NEW YORK–(BUSINESS WIRE)–August 29, 2023– After launching its award-winning ESG operating platform in 2022, BuildESG, a leader in the responsible investment reporting landscape, is delighted to announce the launch of a free version of its BuildRI software. Tailored to meet the needs of alternative investment managers in private equity, venture capital and private debt and their key stakeholders, BuildRI aims to make responsible investment integration more standardized, accessible and actionable.
“In a financial landscape where responsible investment practices are taking center stage, the need for a powerful, yet easy-to-use platform for the alternative investment sector is greater than ever,” said James Lindstrom, CEO of BuildESG. Mr. Lindstrom continued, “The free BuildRI platform provides the essential tools for an alternative investment manager to launch, manage and report on its responsible investment program consistent with leading frameworks and standards.” Supports Launch and Sharing of Responsible Investment Progress Across Limited Partner and Lender Network To reinforce its commitment to facilitating responsible investment across the alternative investment industry, BuildRI offers free portals not just for investment managers, but also for their portfolio companies, limited partners and lenders. These dedicated portals enable sharing of content, ratings and data, serving as a centralized hub for collaborative responsible investment.
Feature-Rich Software Focused on Responsible Investment To support a firm’s program launch, BuildRI offers a comprehensive set of features, including: Portfolio Data Collection and Analysis Tools: Simplify the process of gathering and analyzing data related to responsible investment practices across portfolio companies.
Action Lists: Keep your portfolio teams on track with task lists to build a solid foundation of responsible business practices.
UNPRI-Aligned Assessments: Take advantage of assessments that align with the PRI, a globally respected standard.
Award-Winning Benchmarking and Ratings: Evaluate your firm’s responsible investment performance against industry norms to identify areas for improvement.
Education and Training: Gain access to resources that enhance your understanding and implementation of responsible investment best practices.
Document Sharing and Retention: Centralize all relevant documents in one secure location for effortless compliance and transparency.
Helpful Templates: Accelerate your workflow with ready-made templates designed for responsible investment practices.
To get started with BuildRI, please contact [email protected] or visit www.buildesg.com.
About BuildESG BuildESG is a mission-driven organization providing a standardized Responsible Investment (RI) and Environmental, Social and Governance (ESG) platform and ratings system to investment managers, asset owners, limited partners and lenders. BuildESG’s platform product, BuildRI, is a single source of trust for investment managers and their limited partners, helping to assess, support and highlight managers and portfolio companies who prioritize responsible investment practices. BuildESG’s affiliates have provided strategic reporting services to the world’s leading organizations since 1999. To learn more, please visit www.buildesg.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20230829505166/en/ Information [email protected]
"
|
3,106 | 2,023 |
"Backed by Dell's VC arm, IOTech sets sights on North America | VentureBeat"
|
"https://venturebeat.com/automation/dells-vc-arm-backs-industrial-edge-software-maker-iotechs-expansion-to-north-america"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dell’s VC arm backs industrial edge software maker IOTech’s expansion to North America Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
You may not have heard of IOTech — yet.
But that’s changing if Dell Technologies Capital has anything to say about it. The venture capital arm of the iconic PC brand has invested a new, undisclosed sum in IOTech, a U.K.-based firm that makes open-source software solutions for industrial edge devices.
With the money, IOTech is targeting a major expansion of its business in the U.S. and North America.
Think of all the sensors that the operator of a manufacturing plant might want to stick around their equipment to ensure it is running smoothly. Or the operator of a solar farm, who wants to know which cells are not performing well. Or the landlord of a building seeking to boost its environmental efficiency ratings. IOTech’s software is designed to work for all these types of use cases where physical capital can be monitored best by sensors at the “edge” feeding data about performance, state and conditions back into the cloud for decision-makers to review and act upon.
This is what gives IOTech the “IOT” of its name: the Internet of Things — in this case, industrial things. Real-world devices equipped with sensors and analyzed using intelligent software.
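IOTech has not published its interfaces here, but the general pattern the article describes — an edge gateway sampling equipment sensors and forwarding readings upstream — can be sketched with a generic MQTT publisher. The broker address, topic and payload shape below are assumptions for illustration, not IOTech's actual APIs:

import json, random, time
import paho.mqtt.client as mqtt  # generic MQTT client, not an IOTech SDK

BROKER = "mqtt.example.com"          # hypothetical cloud-side broker
TOPIC = "plant-7/press-line/vibration"

client = mqtt.Client()
client.connect(BROKER, 1883)

# Sample a (simulated) vibration sensor once a second and forward it upstream.
for _ in range(10):
    reading = {"ts": time.time(), "mm_per_s": round(random.uniform(0.5, 4.0), 2)}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(1)

client.disconnect()

In practice, a platform like IOTech's sits between that raw device traffic and the cloud, handling protocol translation, filtering and local analytics before data leaves the site.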
British IOT invasion? The company already has a star-studded client list, including Accenture, EATON, Fluence Energy, Johnson Controls, King Steel and Schneider Electric.
While IOTech has to date built up most of its clientele in Europe and the Asia/Pacific region, the new funding round from Dell Technologies Capital — following its 2018 seed investment, according to Crunchbase — seeks to empower the company to make a bigger impact across the pond.
Existing stakeholders, including SPDG — the holding company of the Périer-D’Ieteren family — Northstar Ventures and the Scottish Investment Bank are all contributing more.
The fresh funds will empower the company to beef up its sales, marketing and pre-sales support. On top of that, IOTech has added field CTOs to its roster, further fortifying its expertise.
Notable names endorse IOTech To help navigate this bold new chapter, David C. King, a seasoned hand in the industrial IoT world and former CEO of FogHorn, is joining IOTech’s board of directors.
King is no stranger to steering companies toward success; he led FogHorn through three successful funding rounds before it was acquired by industrial automation giant Johnson Controls in 2022.
Gregg Adkin, managing director with Dell Technologies Capital, believes that IOTech’s technology is like a gold mine for industrial data.
As IOTech sets its sights on further U.S. growth, it’s also broadening its product portfolio beyond its current offering, Edge Central, a control center that manages everything from connectivity to data processing for sensors and edge devices.
This platform is a spin-off from EdgeX, a leading open-source data integration platform.
IOTech says its platform is not only adaptable, but future-proof, safeguarding investments well beyond the hardware life cycle.
"
|
3,107 | 2,023 |
"UserTesting expands platform with generative AI to scale human insights | VentureBeat"
|
"https://venturebeat.com/ai/usertesting-expands-platform-with-generative-ai-to-scale-human-insights"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages UserTesting expands platform with generative AI to scale human insights Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
UserTesting is kicking off its Human Insights Summit today with the launch of a new set of generative AI powered capabilities for its platform.
The new features that the company is simply calling UserTesting AI are intended to help customers scale up experience research efforts using AI.
The initial set of tools benefits from an integration with OpenAI to help users more easily generate summaries and build reports from research data. These tools extend existing AI capabilities that UserTesting has developed in-house in recent years to help organizations better understand user behavior and sentiment for products and services.
Back in April, UserTesting launched its machine learning (ML)-powered friction detection capability for behavioral analytics.
The goal with the new UserTesting AI tools is to go beyond what the company has already been doing and tap into the power of gen AI technologies like OpenAI’s ChatGPT.
“UserTesting AI is a set of capabilities that are designed to be easily understood by our customers as being AI powered, that help a research, design, marketing or product team, essentially achieve more throughput,” Andy MacMillan, UserTesting CEO, told VentureBeat in an exclusive interview.
How UserTesting is using generative AI alongside its existing machine learning To date, most of the AI capabilities that UserTesting has provided to its users fall within the domain of ML.
MacMillan said that UserTesting has built its own ML models to take data from its platform, which enables teams to test how users interact with and experience a service or application. UserTesting records the user sessions and then uses its ML models to derive insights. The models have helped to identify things like sentiment, intent and where users get stuck in a workflow.
With the new UserTesting AI tools, the company isn’t just sending raw data to the gen AI model to process. MacMillan emphasized that UserTesting is using the gen AI alongside its existing models.
“We’re taking a lot of those ML outputs, where we’ve extracted what a researcher would find interesting, the friction, the insights, the suggestions, in addition to the transcripts, and we’re providing that and we’re creating tasks, summaries and research report summaries using large language models (LLMs),” MacMillan said.
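MacMillan did not share implementation details beyond the OpenAI integration, but the pipeline he describes — feeding ML-extracted highlights plus the transcript into an LLM to draft a report summary — would look roughly like the sketch below. The prompt wording, model choice and highlight format are assumptions for illustration, not UserTesting's actual code:

import openai  # assumes OPENAI_API_KEY is set in the environment

# Outputs of the in-house ML models (illustrative), plus the session transcript.
highlights = [
    "Friction detected: user backtracked 3 times on the checkout form",
    "Negative sentiment around the shipping-cost reveal",
]
transcript = "full session transcript text goes here"

prompt = (
    "You are a UX research assistant. Using the detected highlights and the "
    "transcript, write a short research summary that cites specific moments.\n\n"
    "Highlights:\n- " + "\n- ".join(highlights) + "\n\nTranscript:\n" + transcript
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message["content"])

The important design point is that the LLM is summarizing pre-filtered, model-extracted findings rather than raw session data, which is how the citations back to specific moments become possible.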
Generative AI at UserTesting helps to *avoid* bias To date, user experience researchers have largely had to write reports and summaries on their own based on the data and insights from a UserTesting operation.
But now, for example, for a team that wants to test a new mobile app, the platform identifies and matches them with profiles of people ideal for testing. Users are then recorded as they test out the app prototype, with UserTesting ML models identifying interesting data points. The user is also recorded with video and audio, and the entire session is transcribed.
“We run all those data streams through our ML models that help extract interesting moments,” said MacMillan.
The UserTesting platform then provides a results page with a list of interesting data points and highlights from the session. With UserTesting AI, researchers now get a full summary and report generated based on detailed findings. The report and summaries generated by AI will also have specific citations and references that can help researchers dig into specific data points.
While there is some concern with the broader use of gen AI and how it could have potential bias, MacMillan said that UserTesting AI could actually help to reduce potential bias.
“We think UserTesting AI can help our customers be more efficient,” said MacMillan. “I think it also helps researchers to avoid missing something, and it can help avoid biases, so you as the person doing the research might have some biases and AI can help you maybe see things you might not see.”
"
|
3,108 | 2,023 |
"Typeface teams with GrowthLoop and Google Cloud to launch unified 'GenAI Marketing Solution' | VentureBeat"
|
"https://venturebeat.com/ai/typeface-teams-with-growthloop-and-google-cloud-to-launch-unified-genai-marketing-solution"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Typeface teams with GrowthLoop and Google Cloud to launch unified ‘GenAI Marketing Solution’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
AI company Typeface has partnered with marketing player GrowthLoop and Google Cloud with the goal of transforming marketing for organizations of all sizes. The companies today announced a unified “GenAI Marketing Solution” that brings together the best of their respective platforms and gives marketers an end-to-end approach to create and launch campaigns across channels, at scale.
The offering allows teams to produce personalized content — from blogs to social media posts — for their campaigns, leveraging data from Google BigQuery , audience segmentation from GrowthLoop and Typeface’s generative AI smarts. According to the companies, it can cut the time taken to build and launch creative campaigns from several weeks down to days or even a few hours.
“Marketing leaders across the globe have shared with us that producing personalized content at scale across audience segments can be a significant challenge, often causing campaigns to take months and months to launch,” Vishal Sood, head of product at Typeface, said in a statement. “The GenAI Marketing Solution announced at Google Cloud Next offers marketers — for the first time ever — the ability to rapidly generate and deploy tailored, on-brand content across customer segments. With this new solution, marketing teams can dramatically accelerate campaign launches freeing up time for more creativity and collaboration.” How exactly does the GenAI Marketing Solution work? Currently available in private preview for Google BigQuery users, the GenAI Marketing Solution merges gen AI from Typeface into a streamlined workflow that covers every aspect of the campaign creation process, from extracting 360-degree customer profiles and defining audience segments to creating personalized, on-brand content, distributing it and measuring the results.
First, users have to connect their BigQuery instance with GrowthLoop and use the latter’s visual or natural language builder to query data in the data warehouse and create audience segments to target. Once the segments are ready, they can export them to a marketing channel of choice, such as Google Ads, and use the Typeface integration with GrowthLoop to develop personalized creatives, ad copies, and campaign assets with text prompts.
As they develop the initial campaign assets, they can expand the effort by using Typeface to create an entire library of content for different marketing channels — such as personalized Instagram ads, SEO-optimized blog posts, and landing pages — that align with the GrowthLoop audience profile. This gives multiple variations of content, tailored to defined audience segments and brand voice, for different touchpoints.
Post-launch, teams can measure the results of the campaign directly within GrowthLoop, down to individual metrics such as revenue generated.
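Neither company has published the underlying queries, but the first step of the workflow — defining an audience segment over first-party data in BigQuery — amounts to something like the sketch below, with a hypothetical project, table and segment definition rather than anything from the actual product:

from google.cloud import bigquery  # assumes application-default credentials are configured

client = bigquery.Client()

# Hypothetical segment: high-value customers who lapsed in the last 90 days.
sql = """
    SELECT customer_id, email, lifetime_value
    FROM `my-project.crm.customers`
    WHERE lifetime_value > 500
      AND last_purchase < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY)
"""

segment = [dict(row) for row in client.query(sql).result()]
print(f"{len(segment)} customers in the 'lapsed high-value' segment")
# In the described workflow, this segment would then be synced to an ad channel
# and used to prompt Typeface for on-brand creative variations per audience.

GrowthLoop's builder effectively generates and manages queries of this kind, so marketers define segments without writing SQL themselves.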
“Our collaboration results in an extraordinary solution, one that promises to reshape marketing workflows for businesses across the globe,” said Chris Sell, cofounder and co-CEO of GrowthLoop. “As we harness the transformative power of generative AI , we find ourselves at the cusp of a new chapter, empowering digital marketing teams with unparalleled efficiency and success-driving tools.” Generative AI is the catalyst for marketing While it remains unclear when the unified GenAI Marketing Solution will become generally available, there’s no denying that the move to rope in generative technologies is a welcome change for marketers who are facing increasing pressure to create compelling, personalized content to drive results in today’s fast-paced environment.
According to a Salesforce survey of more than 1,000 full-time marketers in the U.S., U.K. and Australia, gen AI is being seen as a “game-changer” that can save an employee about five hours of work every week. That’s more than a month every year, assuming eight-hour work days.
Among those using the technology at present, the most popular use case is basic content creation and writing marketing copy, with as many as 76% handling those tasks with LLM -driven apps like ChatGPT.
The next most popular use cases are inspiring creative thinking (71%), analyzing market data (63%) and generating image assets (62%).
Notably, LinkedIn’s Campaign Manager has already debuted a feature that allows users to generate introductory text and headlines for ads, using their data from the platform, while Meta has an AI Sandbox that lets advertisers create variations of basic copy for different audiences through text prompts.
"
|
3,109 | 2,023 |
"Sprig uses AI to transform product surveys into conversational data | VentureBeat"
|
"https://venturebeat.com/ai/sprig-uses-ai-to-transform-product-surveys-into-conversational-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sprig uses AI to transform product surveys into conversational data Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Sprig , a five-year-old startup focused on creating smart, contextually aware in-app surveys for enterprises, earned a big vote of confidence last year, securing $30 million in funding from prominent venture capital firms including a16z and Accel.
Today, the company is announcing where some of those funds went: A new feature Sprig calls AI Analysis for Surveys, which, as the name suggests, uses generative AI large language models (LLMs) to intelligently comb through survey data and provide instantaneous insights to the company that conducted the survey.
Surveys that can answer you back To put it more bluntly: Sprig’s AI Analysis for Surveys transforms survey data into a conversational AI product.
With it, you as the survey owner can ask your survey results any conceivable question, and the AI will sort through them and attempt to respond with the most appropriate data, insights, takeaways or suggestions — and this includes qualitative survey data like open-ended text entries, not just multiple choice or quantitative answers.
“You can ask Sprig AI to answer any custom questions about your survey data, and it will analyze responses across all of your survey questions to find the answer,” Sprig CEO Ryan Glasgow wrote in a blog post about the news.
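Sprig hasn't detailed how the custom-question answering works under the hood, but functionally it is question answering over a survey's responses with a large language model. A minimal sketch of that pattern, using made-up survey data and the OpenAI API rather than Sprig's actual implementation:

import openai  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical open-ended responses from an in-product survey.
responses = [
    "Checkout felt slow on mobile.",
    "Love the new dashboard, but exporting reports is confusing.",
    "I churned because pricing jumped after the trial.",
]

question = "What are the top reasons users say they churn?"

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Survey responses:\n- " + "\n- ".join(responses)
                   + "\n\nAnswer this question using only the responses above: " + question,
    }],
)
print(resp.choices[0].message["content"])

Grounding the answer in the collected responses, rather than the model's general knowledge, is what makes this usable for qualitative data like open-text entries.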
The company also announced an expanded free plan, and new large enterprise customers including PayPal, Figma, Ramp, Peloton and Mixpanel.
“World-class product teams continue to choose our platform because they value their user experience and have found Sprig to be a mission-critical platform to differentiate their products in today’s competitive environment,” wrote Glasgow.
Building upon initial success Sprig first made waves in 2020 with the launch of its in-product survey platform and Open-Text AI Analysis feature, which automatically groups open-ended survey responses (those questions that ask you to write about your experience in a text box) into themes.
The feature was adopted quickly by leading enterprises including Dropbox, Loom, Coinbase, Robinhood and Square. To date, Sprig has analyzed feedback from more than 6 billion product visitors across hundreds of high-growth technology companies.
With the new AI Analysis for Surveys, Sprig takes it to the next level by analyzing entire survey datasets. Product teams can now: review AI-generated survey summaries for top takeaways without manual analysis; ask custom questions about their data and receive AI-powered responses; and explore new analysis questions suggested by Sprig’s AI based on the survey results to go deeper into their survey data and find less obvious trends. These findings can be used to improve and differentiate a product faster, and in ways that would not necessarily have been noticed by human review alone.
Addressing pain points to growth Glasgow wrote in an email to VentureBeat: “AI Analysis for Surveys solves a common pain point for product teams looking to deeply understand and optimize a specific part of their product experience, from understanding why users are churning out of a product to figuring out how to boost the conversion funnel.” In addition to rolling out AI Analysis for Surveys, Sprig is expanding its free plan to make its AI-powered product insights accessible to more teams.
The free plan now includes in-product surveys, session replays and Open-Text AI Analysis. Teams of all sizes can immediately start using Sprig and the new AI Analysis for Surveys feature set.
"
|
3,110 | 2,023 |
"Runway unveils Creative Partners Program | VentureBeat"
|
"https://venturebeat.com/ai/runway-announces-creative-partners-program-giving-select-users-unlimited-plans-new-features"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Runway announces ‘Creative Partners Program’ giving select users unlimited plans, new features Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
RunwayML, the New York City-based startup that’s raised hundreds of millions in venture funding to develop generative AI video creation tools, has announced it will begin granting a select group of users early access to new features and AI models.
The company announced its new Runway Creative Partners Program on X (formerly Twitter), writing, “This program provides a select group of artists and creators with exclusive access to new Runway tools and models, Unlimited plans, 1 million credits, early access to new features and more.” “It’s all about being in the right place at the right time,” Runway CEO and founder Cristóbal Valenzuela posted on X.
What the new Creative Partners Program offers On its website, Runway goes into more detail about what the Creative Partners Program will offer. Among the features are “direct access to the Runway team” and “priority access to Runway Studios grants.” Runway’s pricing structure requires using several proprietary credits to generate each video through its Gen-1 and Gen-2 multimodal video generation AI tools (users can generate videos from existing videos, text prompts, and imagery).
While Runway does offer a free tier with 125 non-renewable credits, the Standard Plan costs $12 per month per user for 625 credits that renew each month. The Unlimited Plan tier costs $76 per user per month and grants each user 2,250 credits per month of video generations in Gen-1 and Gen-2, plus unlimited credits in a slower “relaxed” generation mode.
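For context, a rough back-of-the-envelope comparison of the two paid tiers — ignoring the Unlimited plan's relaxed mode and using only the published prices above — works out as follows:

# Rough cost-per-credit comparison of Runway's paid tiers (illustrative only).
standard_price, standard_credits = 12, 625      # $/user/month, credits/month
unlimited_price, unlimited_credits = 76, 2250   # $/user/month, fast credits/month

print(f"Standard: ${standard_price / standard_credits:.4f} per credit")    # ~$0.0192
print(f"Unlimited: ${unlimited_price / unlimited_credits:.4f} per credit")  # ~$0.0338

On fast-mode credits alone the Unlimited tier is pricier per credit, so its value lies mainly in the unlimited relaxed-mode generations.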
A major win for users The Unlimited plan with a million credits to start is a major win for potential users.
In addition, the company has continually updated and expanded its gen AI offerings, moving from the text-to-video Gen-1 release in February of this year, to a video-to-video Gen-1 mobile app in April, to the release of the text/video/image-to-video Gen-2 for desktop and mobile in June. Just this month, the company released a new “ Watch ” tab to show off the video creations of its users, similar to YouTube.
Therefore, it stands to reason that the company will have more new features and services soon — and it is promising to give them to those accepted into its Creative Partners Program first.
How do you get into Runway’s Creative Partners Program? The company is for now open to considering seemingly any and all applicants, asking them to fill out a form on its website with fields for the user’s name, pronouns, portfolio and social media accounts.
When it comes to who is eligible, a Runway spokesperson emailed VentureBeat the following statement: “Anyone from anywhere in the world is welcome to apply. We’re looking for creators who are using AI tools and techniques to push the boundaries of creativity. It’s not required to have a paid Runway account to be admitted.
“Applications will be accepted on a rolling basis over the coming weeks and months, and creators who are accepted will announce their involvement to their own communities at their discretion.” Following in the footsteps of other video creation platforms The obvious comparison to Runway is increasingly YouTube, although the latter is of course not limited to, nor does it presently offer, gen AI video creation tools and videos.
But YouTube paved the way in building a robust ecosystem of amateur (and pro) video creators, which it sought to nurture and continues to support through its YouTube Partner Program (YPP) , which allows creators to monetize their videos through subscriptions, ecommerce affiliate links and product mentions, advertising, digital stickers and more.
YouTube itself funded several higher-production TV shows and films through its YouTube Originals brand, including the Karate Kid spinoff Cobra Kai , although YouTube ultimately canceled its scripted development arm and that series was later canceled and picked up by rival Netflix.
It’s unclear just how much of YouTube’s playbook Runway may seek to emulate, but launching a Creative Partners Program is a similar starting move for creating a thriving creator ecosystem, and seems like a necessary first step toward building the dominant AI video platform.
"
|
3,111 | 2,023 |
"Pirros, a startup that applies AI to streamline drawing sets for buildings and infrastructure, lands $2 million seed round | VentureBeat"
|
"https://venturebeat.com/ai/pirros-a-startup-that-applies-ai-to-streamline-drawing-sets-for-buildings-and-infrastructure-lands-2-million-seed-round"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Pirros, a startup that applies AI to streamline drawing sets for buildings and infrastructure, lands $2 million seed round Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Pirros , a technology platform that helps architecture and engineering firms manage their drawing sets more efficiently, announced today that it has raised a $2 million seed round from a group of investors and advisors with deep industry connections.
Notable contributors to the funding round include angel investors Carl Bass, former chief executive of Autodesk; Joseph Walla of HelloSign; and Ryan Sutton-Gee of the construction software firm PlanGrid. Venture capital firms including YCombinator, FundersClub and Twenty Two Ventures also participated in the seed round.
A centralized, searchable platform Pirros is a tool created to streamline detail management for architecture and engineering firms. It automatically categorizes and catalogs the primary deliverable of design professionals: The many thousands of drawing sets that firms create each year for buildings and infrastructure.
Most firms currently face an extremely inefficient paradigm of creating, using and effectively discarding design details — not because they are no longer useful, but because they are stored on on-premises servers with little to no ability to rediscover and reuse them. This means architects and engineers have to re-create drawings over and over for each project, which has the further effect of stripping them of the quality control process they went through in the course of initial creation.
With Pirros, architects and engineers can spend more time actually designing buildings instead of documenting them. This is achieved by automatic information aggregation and storage, so that all of a company’s outputs are stored and managed in a centralized, searchable platform for easy future re-use.
Architects and engineers can quickly get up to speed Pirros CEO and cofounder Ari Baranian said in an interview with VentureBeat: “Every company has tried to build out a small catalog, so about a couple of hundred details, and these will be the most common details that they’ve used … There’s just never been the tools to expand the catalog beyond 100, 200, or even 500 details.” He further emphasized: “Now, our average company has over 10,000 [searchable] details on the platform. So with that ability, any new architect, any new engineer that joins the firm, quickly gets up to speed on the different standards of that office.” The proof is in the rapid adoption of the tool among some of the industry’s biggest players. The software is already being used by more than 30 firms including large architecture companies like KPFF Engineers and RAMSA.
Using AI to make building design easier and faster Pirros leverages the metadata from the building information models (BIMs) that firms use to create their drawing sets. It extracts and indexes this data into a searchable and reusable catalog of 2D assets. It also uses clustering algorithms to group similar details together so that users can see different versions of the same condition and choose the best one.
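Pirros has not published the internals of this pipeline, so the following Python sketch is purely illustrative: it assumes detail titles have already been pulled from the BIM metadata and shows how similar details could, in principle, be grouped with an off-the-shelf clustering step. None of the names or data here come from Pirros.

# Hypothetical sketch: group similar drawing-detail titles extracted from BIM metadata.
# This illustrates the general clustering idea only, not Pirros's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

detail_titles = [
    "Typical roof drain at low-slope membrane",
    "Roof drain detail, low-slope roof",
    "Steel column base plate connection",
    "Base plate at steel column, typical",
]

# Turn each title into a TF-IDF vector (dense, since the clusterer needs an array).
vectors = TfidfVectorizer().fit_transform(detail_titles).toarray()

# Group titles whose vectors are close in cosine distance; the threshold is a guess.
# (The metric/linkage arguments assume scikit-learn 1.2 or newer.)
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8, metric="cosine", linkage="average"
)
labels = clusterer.fit_predict(vectors)

for label, title in sorted(zip(labels, detail_titles)):
    print(label, title)

A production system would presumably cluster on richer signals (sheet references, geometry, revision history), but the same group-then-compare pattern is what lets users see different versions of the same condition side by side and choose the best one.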
The company says the platform integrates with a firm’s existing tools and workflows, and that onboarding is simple: Firms identify the models they want to include in their Pirros catalog, and Pirros handles the rest through its integration pipeline.
Cutting drafting work in half for architecture firms The company has received positive feedback from its customers, especially from the youngest architects and engineers who use its platform.
“Seeing the amount of traction that we’ve gotten with the youngest architects and engineers was surprising to us, but also super motivating to see that we’re actually making an impact there,” said Baranian.
Pirros plans to use the $2 million seed funding to grow its team, improve its product and expand its market. One of the upcoming features that Baranian is excited about is using AI to automatically identify the best version of every detail and provide users with suggestions and recommendations.
Pirros is an early mover in architectural detail management, a niche that has been largely overlooked by other technology platforms. By solving this specific problem, Pirros aims to transform the way buildings are designed and documented.
As Baranian put it: “We built our product exactly as we would have wanted to use it.”
"
|
3,112 | 2,023 |
"OpenAI wants teachers to use ChatGPT for education | VentureBeat"
|
"https://venturebeat.com/ai/openai-wants-teachers-to-use-chatgpt-for-education"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI wants teachers to use ChatGPT for education Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
It’s not only programming , journalism and content moderation that OpenAI is seeking to revolutionize with the use of its landmark large language models (LLMs) GPT-3, GPT-3.5 and GPT-4.
Today, the company published a new blog post titled “Teaching with AI” that outlines how six educators from various countries (most teaching at the university level, one at a high school) are using ChatGPT in their classrooms.
“We’re sharing a few stories of how educators are using ChatGPT to accelerate student learning and some prompts to help educators get started with the tool,” the company writes.
How educators are already using ChatGPT in their classrooms The examples range from one educator using ChatGPT as a kind of educational role player, taking on the part of a debate rival or recruiter and engaging students in a dialog; to another teacher using ChatGPT for translation assistance for English-as-a-second-language students; to yet another having their students fact-check the information it generates.
The company also includes sample prompts developed by AI influencer and University of Pennsylvania Wharton School professor Ethan Mollick and his wife and fellow professor Lilach Mollick that assist teachers with lesson planning and even turn the default ChatGPT into an “AI tutor” for students.
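The prompts in OpenAI’s post are written to be pasted directly into ChatGPT. As a rough, hypothetical illustration only (the system prompt below is invented for this example, not the Mollicks’ published wording), a technically inclined teacher could wrap a similar tutor-style instruction around the OpenAI chat API:

# Hypothetical sketch of an "AI tutor" built on the OpenAI Chat Completions API
# (pre-1.0 openai Python SDK). The system prompt is a stand-in, not OpenAI's or
# the Mollicks' published prompt.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

TUTOR_PROMPT = (
    "You are a patient tutor. Ask the student what they want to learn and what "
    "they already know, then guide them with questions and hints instead of "
    "giving away answers."
)

def ask_tutor(history, student_message):
    # history is a list of prior {"role": ..., "content": ...} messages.
    messages = [{"role": "system", "content": TUTOR_PROMPT}] + history
    messages.append({"role": "user", "content": student_message})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content

print(ask_tutor([], "Can you help me understand photosynthesis?"))

As the Educator FAQ discussed below makes clear, a wrapper like this is a study aid rather than an assessor: OpenAI’s guidance keeps a human in the loop for any grading or assessment decisions.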
Asked by this VentureBeat author on X (formerly Twitter) if OpenAI paid Ethan Mollick for use of his and his wife’s prompts, he responded in the negative: “No. I have never taken any money or compensation in any way from OpenAI, including token credits,” adding, “In this case, they used prompts and material we have already published.”
Lessons learned? Of course, the issue of generative AI in the classroom, like many topics related to the technology, has been fraught with controversy, especially with regard to students using it to cut corners or avoid doing their own coursework, such as writing essays.
In fact, several schools, districts, and departments of education around the globe have already banned ChatGPT and added it to their internet network blocklists, although the New York City Public School system did an about-face in May and moved to allow teachers to use ChatGPT as they see fit.
OpenAI made headlines earlier this year by releasing an “ AI Text Classifier ” that was designed to allow anyone, including educators, to copy and paste in text and determine whether or not it was written by AI, but then ended up discontinuing it last month due to its “low rate of accuracy.” Limitations acknowledged Today, OpenAI elaborated on the issues with the Text Classifier in a new Educator FAQ (frequently asked questions), which is far more robust and arguably even more helpful for schools than its promotional blog post.
Answering the question of “How can educators respond to students presenting AI-generated content as their own?” OpenAI writes: “While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content,” and “When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.” In addition, OpenAI admits: “There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.” Plus, as the company points out, “even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.” ‘Human in the loop’ Instead, OpenAI notes that some teachers have begun asking students to show their conversations with ChatGPT as a form of displaying their critical thinking skills.
Furthermore, while OpenAI says research supports the idea that “ChatGPT can be a helpful tool, alongside teachers, for providing students with feedback,” it does not link to that research, and says “it is inadvisable and against our Usage Policies to rely on models for assessment decision purposes without a ‘human in the loop.'” In other words: the idea of a teacher handing over most of their duties to ChatGPT is not in the cards yet, and likely not for the foreseeable future; the same goes for students and their coursework.
Still, the company clearly wants to promote the idea that ChatGPT can be a useful new tool for both sides of the educational equation, teachers and students alike, joining the familiar classroom sights of pencils, notebooks, computers, and globes.
"
|