Dataset columns (type and observed range/length):
id: int64, 0 to 17.2k
year: int64, 2k to 2.02k
title: string, lengths 7 to 208
url: string, lengths 20 to 263
text: string, lengths 852 to 324k

id: 3113
year: 2023
"OpenAI asks to dismiss most of Sarah Silverman's, authors’ case | VentureBeat"
"https://venturebeat.com/ai/openai-seeks-to-dismiss-majority-of-sarah-silvermans-and-authors-claims-in-chatgpt-lawsuits"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI seeks to dismiss majority of Sarah Silverman’s and authors’ claims in ChatGPT lawsuits Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI , the organization behind ChatGPT and its underlying large language models (LLMs) GPT-3.5 and GPT-4, has filed motions to dismiss in two copyright lawsuits levied against the company for using copyrighted materials in AI model training data. The plaintiffs include a pair of U.S. authors and a second group including comedian and actor Sarah Silverman. In the filings submitted to the U.S. District Court for the Northern District of California on Monday, OpenAI requested the dismissal of five out of the six counts lodged in the lawsuits. The company defended the transformative nature of its LLM technology, underscoring the need to balance copyright protection and technological advancement. OpenAI also said that it planned to contest the remaining count of direct copyright infringement in court as a matter of law. The motions addressed the claims asserted in the copyright lawsuits and aimed to elucidate the case’s merits. OpenAI underscored the value and potential of AI, particularly ChatGPT, in enhancing productivity, aiding in coding and simplifying daily tasks. The company likened ChatGPT’s impact to a significant intellectual revolution, drawing parallels with the invention of the printing press. “You can start to see the story that they’re going to tell here, which is that copyright has limitations to it. It doesn’t extend to facts and ideas,” said Gregory Leighton, a privacy law specialist at law firm Polsinelli. “Even if a work is copyright[ed] and an LLM [is] processing it or then producing a summary of it back or something like that, that’s not a derivative work on its face.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! OpenAI based its defense on the fundamental facts of the LLM technology: It is a type of neural network trained on extensive text data to comprehend human language effectively, and it enables users to input text prompts and receive corresponding generated content. Per the filings, OpenAI claims its products merge LLMs with parameters ensuring the accuracy, relevance, safety and utility of the produced outputs. Balancing copyright law and technological innovation The plaintiffs argued that ChatGPT was trained without permission using their copyrighted works. 
In response, OpenAI contended that this perspective overlooks the broader implications of copyright law, including fair use exceptions. The company asserted that fair use can accommodate transformative innovations like LLMs and is aligned with the constitutional intent of copyright law to foster scientific and artistic progress.

“It’s true substantively, but there’s an interesting sleight of hand going on here,” said Leighton. “You shouldn’t be talking about fair use in a motion to dismiss, because fair use is an affirmative defense. It’s actually something that they, as the defendant, have to affirmatively plead and prove up,” he said.

OpenAI’s motion cited court cases where the fair use doctrine protected innovative uses of copyrighted materials. It called for the dismissal of secondary claims from the plaintiffs, including vicarious copyright infringement, violations of the Digital Millennium Copyright Act (DMCA), violations of California’s Unfair Competition Law (UCL), negligence and unjust enrichment. OpenAI challenged the legal validity of these claims and argued for their removal based on flawed legal reasoning.

“These were probably always the ancillary and companion claims, and the main meal here is copyright infringement,” said Leighton.

Vicarious copyright infringement applies in cases where a party indirectly benefits from copyright infringement committed by another. OpenAI stated that the plaintiffs’ allegations of direct infringement were not valid as a matter of law, that it had no “right and ability to supervise” the alleged infringement, and that it had no direct financial interest in it.

OpenAI’s arguments in favor of dismissal

OpenAI rebutted the plaintiffs’ various theories of how it violated vicarious infringement rules, the DMCA and the UCL, including the claims that every ChatGPT output is an infringing derivative work of their copyrighted books and that LLM training removes the “copyright management information” from the specified works.

OpenAI contends that the plaintiffs don’t have enough evidence to claim that LLMs produce derivative works, and that if those standards were applied more widely, photographers would be able to sue painters who reference their material. The evidence offered by the plaintiffs about copyright management information was contradictory and failed to show how it was purposely removed, OpenAI said.

The company also found deficiencies in the negligence and unjust enrichment claims, saying that there were no grounds for negligence because OpenAI or its users would be engaging in intentional acts and OpenAI did not owe the plaintiffs a duty of care. Nor, according to the filings, was there any evidence to support the claim that OpenAI held on to profits or benefits from the infringed material. Finally, OpenAI argued that both the negligence and unjust enrichment state law claims are preempted by federal copyright law.

“It might take a month or six weeks, but the plaintiffs will file a response where they’ll have to say why they think these claims should stay in,” said Leighton. “That actually might be quite interesting just to get their take of where they’re going with this.”

OpenAI’s dismissal request and the path forward

OpenAI’s dismissal motion is founded on ChatGPT’s transformative nature, fair use principles and perceived legal shortcomings in the plaintiffs’ ancillary claims.
The motions provided insight into OpenAI’s overall defense of its ongoing operations as it navigates the complex intersection of copyright law and AI technology advancement. While Leighton believes that this particular motion to dismiss may not have huge immediate effects, the stakes in the overall case remain high. Because the lawsuits will help determine the extent to which large language models can be trained on copyrighted works without infringing copyright, their outcome could have major implications for AI use cases, especially if courts find that ingesting copyrighted works always infringes copyright.

“We’re getting the first real insight into where this is really going to go,” said Leighton. “They’re introducing these things to the judge, not because it really has anything to do with the motion to dismiss itself and what they’re trying to accomplish procedurally, but it’s the intro thematically to [OpenAI’s] side of the case here.”

As the lawsuits unfold, this legal conflict will likely help define the future of copyright law and technological progress."
id: 3114
year: 2023
"At the US Open, IBM serves up AI-generated tennis commentary and draw analysis | VentureBeat"
"https://venturebeat.com/ai/ibm-serves-up-ai-generated-tennis-commentary-and-draw-analysis-at-the-us-open"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages At the US Open, IBM serves up AI-generated tennis commentary and draw analysis Share on Facebook Share on X Share on LinkedIn Photo by Manuela Davies/USTA Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Back in May, IBM doubled down on its AI efforts with the announcement at the company’s annual Think conference of its new Watsonx product platform, which provides a foundational model library that can be used to fine-tune pretrained models for enterprise application development. Now, the company is serving up what it hopes is a generative AI ace: For the first time, it is offering AI-generated audio tennis highlights for all matches during the two-week-long U.S. Open Tennis Championships , as well as AI-powered analysis to determine the projected difficulty of player draws and potential opponents. More than 700,000 people head to Flushing Meadows, New York, each year to watch the best tennis players in the world compete, while more than 10 million tennis fans around the world follow the tournament through the U.S. Open app and website. And, for three decades, IBM has been working with the United States Tennis Association on creating digital experiences for tennis fans. IBM’s data operations bunker The effort begins in the basement-level IBM data operations center at Arthur Ashe Stadium, where millions of data points are captured and analyzed. There are typically 56 data points collected for every single point of a tennis match. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! IBM is using gen AI models built, trained and deployed with Watsonx, and operating across a hybrid cloud infrastructure from Red Hat OpenShift, to generate detailed audio narration and captions to accompany U.S. Open highlight videos at unprecedented scale — for every match in the singles draw, across all 17 courts. In addition, IBM debuted its Watsonx-powered AI Draw Analysis that uses both structured and unstructured data to project the level of advantage or disadvantage of all players in the singles draw. Each player receives an IBM AI Draw Analysis at the start of the tournament, which will be updated daily as the tournament progresses and players are eliminated. Every draw is ranked, allowing fans to click into individual matches and see the projected difficulty of their draw and potential opponents. 
Previously, the USTA couldn’t cover highlights of all matches

Kirsten Corio, chief commercial officer at the USTA, told VentureBeat that with 128 men and 120 women playing singles in the U.S. Open — as well as doubles, juniors and wheelchair tennis matches — the organization couldn’t cover the highlights of most of the matches throughout the tournament. “Depending on how many writers you have, you can only do a few matches at a time,” she said. “The other matches would just have stats and scores, but no commentary, so those stories are untold.”

So the USTA and IBM began to think about how to scale tournament coverage by combining stats and stories with gen AI. “How could we use the data and technology to actually write highlights that would be reliable and accurate enough?” said Corio.

Corio added that the USTA dreams of including AI-generated highlights in different languages in the future. “We would love to do that in Spanish, to scale more engagement,” she said. “That’s the natural next step.”

Questions for IBM and USTA about AI hallucinations, data control

While the USTA has been partnering with IBM on its technology efforts for decades, when it comes to today’s advanced AI applications, Corio pointed out that being able to control the data and the ecosystem is key. The USTA uses its own curated, official data, “but there are plenty out there who peddle in unofficial data,” she explained. “We’re not yet sure what the downstream effects of that could be, so we’re actually putting together a few different task forces across the company post-U.S. Open, to dig into how can it benefit us? How can we protect against any potential conflict?”

A more of-the-moment concern is AI hallucinations — but in a presentation in the IBM data operations center beneath Arthur Ashe Stadium, an IBM spokesperson told VentureBeat that the company is doing human-in-the-loop quality checks on its AI Commentary. “We’re hoping over time we can reduce the need for human QA, but we do check each highlight clip, to make sure that the commentary is solid,” the spokesperson said."
id: 3115
year: 2023
"IBM and Salesforce team up to bring AI tools to their shared clients | VentureBeat"
"https://venturebeat.com/ai/ibm-and-salesforce-team-up-to-bring-ai-tools-to-their-shared-clients"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM and Salesforce team up to bring AI tools to their shared clients Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Big Blue is teaming with an even Bigger Blue to deliver AI solutions to clients. Today, IBM and Salesforce announced they are joining forces to bring Salesforce AI solutions (Sales GPT, Service GPT, Salesforce Einstein, Slack GPT and Marketing GPT) to customers who do business with both companies. Obviously, what Salesforce brings to the table is its popular and powerful customer relationship management (CRM) software, in addition to the aforementioned AI apps and tools. 160,000 consultants! What IBM offers through the partnership is “industry expertise and innovative delivery models” through its IBM Consulting arm of 160,000 human consultants, the company said in a press release. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Specifically, this includes “IBM Garage … an operating model for business transformation,” which will help the combined clients get their Salesforce AI integrations up and running. IBM notes that the shared customers may also wish to adopt its Watsonx enterprise AI platform for finding and fine-tuning enterprise grade AI models. WatsonX can further help customers find “data locked in backend systems” that they can better access and leverage through their shiny new Salesforce and open-source AI models. That’s classified Further, customers should consider using IBM’s Data Classifier, an “AI-powered application trained on industry-specific data models,” to help them map all their internal data to make it useful and accessible to the AI tools and apps, IBM says. “Companies are embarking on a transformative journey fueled by generative AI ,” Steve Corfield, Salesforce EVP and GM of global alliances and channels said in a press release. “Salesforce partners like IBM Consulting play an important role in helping businesses use Salesforce’s AI, data and CRM technologies to connect with their customers on a new level. Bringing Salesforce and IBM innovations together will help transform the way companies deliver personalized, engaging experiences.” IBM is practicing what it preaches. The original Big Blue used Salesforce and its own watsonx to overhaul its customer service and sales processes — now it’s hoping to do the same for many others around the globe. 
id: 3116
year: 2023
"Google shows off what's next for Vertex AI, foundation models | VentureBeat"
"https://venturebeat.com/ai/google-shows-off-whats-next-for-vertex-ai-foundation-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google shows off what’s next for Vertex AI, foundation models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI is a core focus for Google , and at its Google Next event today, the company announced a series of updates across its portfolio that benefit from the power of generative AI. Front and center are enhancements and new capabilities across Google’s Vertex AI platform, including both developer tooling and foundation models. Google’s PaLM 2 large language model (LLM), first announced at the Google I/O conference in May, is getting an incremental boost with more language support and longer token length. The Codey code generation LLM and the Imagen image generation LLMs are also getting updates to improve performance and quality. Vertex AI is being expanded with new extensions to make it easier for developers to connect to data sources. Google is making both the Vertex AI Search and Vertex AI Conversation services generally available, providing search and chatbot capabilities to Google’s enterprise users. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Rounding out Google’s Vertex AI update is the Colab Enterprise service, which provides compliance and security capabilities to the data science notebook platform. PaLM opens up with larger token length In a press briefing ahead of the Google Next conference, June Yang, VP, cloud AI and industry solutions at Google, detailed some of the Vertex AI-related updates. “AI is undergoing a major shift with the rise of foundation models. Now you can leverage these foundation models for a variety of use cases without ML [machine learning] expertise,” she said. “This is really a game-changer for AI, especially for the enterprises.” Google builds its own foundation models and also provides support for a number of third-party models that can run on Google Cloud. Google’s flagship model is PaLM 2, available in a number of configurations. One is the text model, which is being enhanced with a larger input token length context window, something Yang said has been a “key request” from customers. Expanded from 4,000 to 32,000 tokens, PaLM 2’s context window will enable text users to process longer-form documents than before. PaLM 2 is also being expanded with more language support, now with the general availability of 38 languages including Arabic, Chinese, Japanese, German and Spanish. 
Code development and image generation get a boost

The Codey text-to-code LLM is another foundation model that has received an update, one which, according to Google, provides up to a 25% quality improvement for code generation. “Leveraging our Codey foundation model, partners like GitLab are helping developers to stay in the flow by predicting and completing lines of code, generating test cases, explaining code and many more use cases,” Yang said.

The Imagen text-to-image model is being upgraded as well. The big new feature, one Yang referred to as one of the coolest she’s seen, is something Google calls “style tuning.” “Our customers can now create images aligned to their specific brand guidelines or other creative needs with as few as 10 reference images,” she said. For example, Yang said that with style tuning an Imagen user can apply corporate guidelines to either a newly generated image or an existing one, and the resulting Imagen image will have the appropriate style built into it.

Llama 2 joins Google’s foundation model lineup

While PaLM 2 is Google’s flagship foundation model, the company is also providing third-party LLM access on Google Cloud. The ability to support multiple foundation models is increasingly becoming table stakes for cloud providers; Amazon, for example, supports multiple third-party models with its Bedrock service. Among the new third-party models that Google now supports is Meta’s Llama 2, which was just released in July. Yang said that Google will enable users to apply reinforcement learning with human feedback (RLHF) so organizations can further train Llama 2 on their own enterprise data to get more relevant and precise results. Google will also be supporting Anthropic’s Claude 2 model and has pledged to support TII’s Falcon.

Extending Vertex AI

Foundation models on their own are interesting, but they get a whole lot more interesting when enterprises can connect them to their own data to take action. That’s where the new Vertex AI Extensions tools fit in. Yang said that developers can use the Extensions to build powerful generative AI applications like digital assistants, customized search engines and automated workflows. “Vertex AI Extensions are a set of fully managed developer tools, which connect models via API to real-world data and enable models to perform real-world actions,” Yang said."
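To make the larger context window concrete, here is a minimal sketch of calling a PaLM 2 text model through the Vertex AI Python SDK with a longer-form document. The project ID, location, file name and the "text-bison-32k" model identifier are placeholders reflecting Google's public naming at the time; treat this as an illustration under those assumptions rather than a definitive recipe.

```python
# Hedged sketch: assumes the Vertex AI Python SDK (pip install google-cloud-aiplatform)
# and a Google Cloud project with Vertex AI enabled. IDs and model name are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")

# The 32k-token variant of the PaLM 2 text model described in the article;
# the exact identifier may differ by region and release.
model = TextGenerationModel.from_pretrained("text-bison-32k")

long_document = open("earnings_call_transcript.txt").read()  # a longer-form document

response = model.predict(
    "Summarize the key risks mentioned in the following document:\n\n" + long_document,
    max_output_tokens=1024,
    temperature=0.2,
)
print(response.text)
```

The practical difference from the earlier 4,000-token limit is simply that a document of this length can be passed in one prompt instead of being chunked first.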
id: 3117
year: 2023
"Google brings new AI to AlloyDB and database migration service | VentureBeat"
"https://venturebeat.com/ai/google-brings-new-ai-to-alloydb-and-database-migration-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google brings new AI to AlloyDB and database migration service Share on Facebook Share on X Share on LinkedIn Google Cloud offices in Sunnyvale, California Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At the Google Cloud Next conference today, Google will announce a series of AI-powered updates across its portfolio, including its database platforms. Among the AI focused database announcements is the introduction of AlloyDB AI, which brings vector embeddings to the PostgreSQL compatible cloud database. The new vector embeddings will also be part of the AlloyDB Omni service which is entering public preview today, enabling users to run AlloyDB outside of the Google Cloud. AlloyDB was first announced as a preview by Google in May 2022, providing both transactional and analytics capabilities with a PostgreSQL based database. The AlloyDB Omni platform was initially detailed by Google in March 2023, opening up the database to wider deployment options. Easier queries with natural language AI will also help to enable database migrations from the Oracle database to AlloyDB, with the new Duet AI capability in the Google Database Migration Service. Beyond AlloyDB, Google is also introducing the new Cloud Spanner Data Boost capability that will enable data sorted in the Cloud Spanner database to be more easily queried with Google BigQuery. Duet AI is also making its way into Cloud Spanner to help enable natural language queries for data operations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We really see databases as helping to bridge the gap between large language models (LLMs) and AI apps,” Andi Gutmans, VP and GM for databases at Google, told VentureBeat. “Customers, especially enterprise customers, really like the ChatGPT experience, but ultimately they can’t have something that is too creative, and they really need to anchor their generative AI apps in the actual enterprise data.” Vectors in AlloyDB is more than just pgvector Vector enabled databases are increasingly critical to enabling databases to be data stores for AI applications. While there are purpose built vector databases like Pinecone and milvus , existing database platforms such as PostgreSQL have also increasingly made efforts to support vectors. In PostgreSQL, the open-source pgvector technology is often used in the open-source database to support vectors. 
Some vendors, such as Neon, a PostgreSQL-compatible cloud database, have gone beyond pgvector, with Neon developing its own pg_embedding approach to supporting vectors in PostgreSQL.

Gutmans explained that with AlloyDB AI, Google is providing a “superset” of capabilities on top of pgvector. For one, the vector capabilities have been integrated deeply into the AlloyDB query processing engine. “We’re probably smarter in how we execute the queries and how we optimize the queries,” said Gutmans. The other key element is added vector quantization support. Gutmans explained that quantization enables AlloyDB users to significantly reduce vectors’ resource footprint in a running database, which helps improve performance and reduce storage costs.

AlloyDB AI helps developers create vector embeddings

Beyond just boosting pgvector, Gutmans emphasized that Google’s goal is to make it easier for developers to bring LLMs and enterprise data together. AlloyDB AI gives developers several easy ways to generate vector embeddings. One approach is via an integration with Google’s Vertex AI to create vector embeddings. Additionally, Gutmans noted that Google is integrating a series of very lightweight embedding models into the database. Integration with the open-source LangChain technology is also part of the rollout, with the goal of helping developers pull together data for AI-powered applications.

“You should really think about [AlloyDB] as being all the different capabilities that developers need to be successful and bridging the gap between the data and LLMs,” said Gutmans.

AI power comes to database migration

PostgreSQL — and by extension, databases such as AlloyDB that are based on it — has long been positioned as a potential alternative to the Oracle database. Google has been iterating on its own database migration service for its databases over the last several years. The database migration service aims to automatically map an existing Oracle database and its functions into an AlloyDB deployment. Gutmans explained that the existing technology is a rules-based model that meets many requirements, but it doesn’t solve for all use cases. That’s where the new Duet AI in the database migration service fits in: It enables developers to provide a prompt with manual hints on how they want to migrate certain parts of their Oracle database stored procedures. Gutmans said that Duet AI uses an LLM to generate the necessary code that can run across a cluster.

“There’s only so much you can do with a rules-based engine to migrate Oracle stored procedures to PostgreSQL,” said Gutmans. “Duet AI is basically an AI system for folks doing code conversion for that last mile that we couldn’t actually convert automatically.”"
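Because AlloyDB AI is described as a superset of pgvector, the baseline developer experience looks like ordinary pgvector usage. The sketch below shows that baseline from Python against any pgvector-enabled PostgreSQL endpoint; the connection string is a placeholder, and AlloyDB-specific additions such as in-database embedding generation and quantization settings are deliberately not shown.

```python
# Minimal pgvector sketch via psycopg2. Connection details are placeholders;
# the same statements work against AlloyDB or any PostgreSQL with pgvector installed.
import psycopg2

conn = psycopg2.connect("host=your-db-host dbname=app user=app password=secret")
cur = conn.cursor()

# Enable the extension (may already be enabled by the platform) and create a table
# with an embedding column; 3 dimensions only to keep the example tiny.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute(
    "CREATE TABLE IF NOT EXISTS docs ("
    " id bigserial PRIMARY KEY, body text, embedding vector(3));"
)

# Insert a row; the vector is passed as text and cast to the vector type.
cur.execute(
    "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector)",
    ("hello world", "[0.1, 0.2, 0.3]"),
)

# Nearest-neighbour lookup: '<->' is pgvector's L2 distance operator.
cur.execute(
    "SELECT id, body FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.1, 0.2, 0.25]",),
)
print(cur.fetchall())
conn.commit()
```

The quantization Gutmans mentions would sit underneath this same query path, shrinking the stored vectors rather than changing the SQL a developer writes.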
id: 3118
year: 2023
"Gong Call Spotlight uses AI to summarize customer calls | VentureBeat"
"https://venturebeat.com/ai/gong-introduces-call-spotlight-a-generative-ai-summary-of-customer-calls-for-revenue-teams"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gong’s new Call Spotlight uses AI to summarize customer calls for revenue teams Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Gong , the eight-year-old company focused on making technology that streamlines workflows for revenue teams across sectors, made news earlier this summer by introducing new generative AI-powered features to its customer conversation analysis platform, including gen AI messaging suggestions. Now, it’s going a step further: The company today announced exclusively with VentureBeat that it is introducing the new feature Call Spotlight, accessible for all of its 4,000 platform customers (and counting) globally, at no extra charge. The new AI-driven tool (powered by a mix of proprietary Gong AI models and GPT-4 in Microsoft Azure OpenAI Service ) automatically transcribes and analyzes a revenue team member’s conversation with a customer over video call, audio call, mobile or desktop, even emails and text correspondence — any communications the revenue team member wants — and auto-generates a summary and key points for the revenue team to act on. Increased AI productivity with a human touch For Gong’s largely business-to-business (B2B)-focused clients, who spend lots of time prospecting customers of their own and managing customer relationships through direct correspondence, the tool is poised to offer increased productivity and efficiency. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! At the same time, Gong hopes the new feature allows revenue teams and salespeople even more time to forge and maintain the unique customer relationships that are key to landing business. “You become more efficient” using Call Spotlight, Gong chief product officer and cofounder Eilon Reshef, said in a video call with VentureBeat. “You don’t need to review the whole text. You don’t need to listen to the whole call. You don’t need to take notes. You don’t need to do anything manually.” Just enable Call Spotlight through Gong’s platform — there are modes that will prompt the other parties in any given call to agree to being recorded and analyzed by Gong’s AI — and it will take care of it for you. 
Unparalleled accuracy

Gong says that Call Spotlight is unmatched in its accuracy, offering sales insights that are twice as reliable as generic solutions available on the market, such as consumer large language models (LLMs), because it has been trained on billions of sales interactions that Gong sourced from customers. “What’s unique about Gong is that because we train the system based on revenue conversations, we get much more accurate results,” Reshef told VentureBeat. This includes more accurate AI interpretation of specific company and product names — something other auto-transcription AI services often struggle to handle in VentureBeat’s testing, defaulting to generic words instead of trademarks.

Ask the AI anything

One of Call Spotlight’s standout features is its unique “Ask Anything” function — the first of its kind tailored specifically for sales. Think of it as your personal sales coach, ready to answer any question you throw at it. Whether it’s seeking guidance on sealing a deal or understanding why a particular conversation matters to a regional account executive, Ask Anything delivers precise, context-rich advice.

For instance, after a call, a sales rep might wonder, “What can I do to up my game for closing this deal?” Ask Anything churns out actionable steps based on its deep learning of a specific customer interaction, with context pulled from Gong’s extensive sales data. Similarly, if a manager wants to know whether competitors were name-dropped during a conversation, the tool can sift through the call details and flag any potential threats, allowing for targeted coaching strategies.

Call Spotlight’s key features

In addition to the Ask Anything feature, Call Spotlight includes:

- Highlights: This feature boils down the crux of a conversation into easy-to-digest bullet points and suggests next steps. It’s like a cheat sheet for sales calls, built on Gong’s in-house AI models.
- Outline: Imagine having call topics neatly filed and categorized, almost like chapters in a book. It automatically organizes topics discussed in a call to make it easier for the revenue team member or the entire team to grasp the customer’s needs and concerns.
- Call Briefs: These are condensed summaries of conversations that Gong says allow reps and managers to catch up on previous talks up to 80% faster.
- Automated CRM Updates: In an age when time is money, automated CRM entries save reps from the drudgery of manual data entry, allowing them to focus on what they do best — selling.

Gong customers can also share any of these AI-generated products with their colleagues and managers as needed, and push data to their customer relationship management (CRM) software of choice for recordkeeping — or not. It’s all up to the customer to decide what to do with their gen AI work products.

Security remains paramount

Furthermore, Gong knows that security is top-of-mind for many of its customers, and it seeks to reassure them that safeguarding their data is of paramount importance while introducing all of these new gen AI features. “One of the critical elements for us is security,” said Reshef. “We have some of the Fortune top 50 companies as customers, and they are very concerned about security and unlikely to allow us to sell their data outside the company. So we hire Gong employees and do everything in-house.” Gong noted that it has been recording calls since 2016, is GDPR-compliant and has stayed up-to-date on all relevant regulations in the years since.
By marrying highly accurate, context-specific advice with a range of other features and security, Gong hopes that Call Spotlight will be a game-changer for the revenue teams that try it out."
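Gong has not published Call Spotlight's internals, but the article's description (proprietary Gong models plus GPT-4 in Azure OpenAI Service) maps onto a familiar pattern: feed a call transcript to a chat model with a summarization prompt. The sketch below is a hypothetical illustration of that general pattern using the pre-1.0 openai Python library against an Azure OpenAI deployment; the resource URL, key and deployment name are placeholders, and this is not Gong's actual pipeline.

```python
# Hypothetical transcript-summarization sketch with GPT-4 on Azure OpenAI
# (openai Python library < 1.0). Endpoint, key and deployment name are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://your-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_OPENAI_KEY"

transcript = (
    "Rep: Thanks for joining. Last time you mentioned budget concerns...\n"
    "Customer: Yes, we need sign-off from finance before Q4...\n"
)

response = openai.ChatCompletion.create(
    engine="gpt-4",  # the name of your Azure deployment
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": "Summarize this sales call as highlights, risks and concrete next steps.",
        },
        {"role": "user", "content": transcript},
    ],
)
print(response["choices"][0]["message"]["content"])
```

A production system like the one described would add transcription, consent handling and the human-in-the-loop review Gong says it performs on each output.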
id: 3119
year: 2023
"Couchbase aims to boost developer database productivity with Capella IQ AI tool | VentureBeat"
"https://venturebeat.com/ai/couchbase-aims-to-boost-developer-database-productivity-with-capella-iq-ai-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Couchbase aims to boost developer database productivity with Capella IQ AI tool Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Database vendor Couchbase today announced the launch of Capella IQ, a new AI-powered tool aimed at enhancing developer productivity when building applications on the Couchbase Capella database-as-a-service (DBaaS) cloud platform. Couchbase was originally developed as an open-source NoSQL database technology and has grown in recent years to add capabilities that are commonly found in relational database technologies. In 2021, Couchbase, Inc., went public on the NASDAQ and now trades under the symbol BASE. That same year, the company first released its Capella DBaaS platform, which has continued to expand with support on multiple cloud platforms including Google Cloud. The goal with Couchbase Capella is to provide a database platform for developers that is easier to use and manage. The launch of Capella IQ brings the power of generative AI to the platform to help developers write database code. “Think of it [Capella IQ] as a copilot for developers, using LLM [large language model] foundation models to really enhance the productivity of developers,” Matt Cain, Couchbase president and CEO, told VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How Capella IQ works to improve developer productivity The new tool fits into Couchbase’s overall four-pronged AI strategy, according to Cain. Couchbase’s AI strategy includes driving developer productivity, optimizing AI processing, enabling AI-driven applications anywhere, and complementing its technology with partnerships. Cain said that Capella IQ addresses the first pillar around developer productivity. Cain explained that Capella IQ leverages gen AI models to automate tedious development tasks like generating code snippets, sample datasets and unit tests. He noted that developers can access these capabilities directly within the Capella developer workbench through a conversational interface that is designed to have a low barrier to entry. “It’s completely aligned with how we’re thinking about our AI strategy, but really focused on helping developers be as productive as possible with Capella and enabling next-generation applications,” said Cain. With Capella IQ, Couchbase is using OpenAI’s models to help with code generation. 
Cain noted that Couchbase may choose to also work with other LLM providers in the future. He also emphasized that several capabilities in the Capella platform help to enable the IQ feature beyond just connecting out to an LLM provider. One such feature is the Index Advisor, an existing built-in capability that can analyze data queries and provide users with optimization recommendations to improve database indexes and accelerate response times.

Next on the roadmap for Couchbase is vector support

While Couchbase is now jumping into the gen AI era with Capella IQ, it is still missing at least one critical element needed to help power AI applications: vector embedding support. This is an increasingly common feature on existing database platforms, with multiple vendors, including DataStax, Google with AlloyDB and MongoDB, announcing support in 2023. Support for vector embeddings is very much on Cain’s mind, and it is part of his company’s roadmap for inclusion in the near future. He explained that vector embedding support will be enabled as an extension to the platform.

“Our underlying system is a multi-model caching JSON document database that performs both operational and analytical capabilities, and then we have architecturally enabled services like full text search,” he said. “With a similar architecture we can approach vector and make that a seamless aspect of our platform, where developers can not only take advantage of those capabilities, but do more with less, with a true enterprise platform.”"
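For context on the "database code" a copilot like Capella IQ drafts, the sketch below runs a plain SQL++ query against a Capella cluster with the Couchbase Python SDK (4.x assumed). The endpoint and credentials are placeholders, the travel-sample bucket is Couchbase's standard sample data, and the query itself is only an example of the kind of snippet such a copilot might suggest, not Capella IQ's actual output.

```python
# Illustrative only: a SQL++ query of the kind a database copilot might generate,
# executed with the Couchbase Python SDK (assumed 4.x). Endpoint and credentials
# are placeholders; 'travel-sample' is Couchbase's standard sample bucket.
from datetime import timedelta

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, QueryOptions

cluster = Cluster(
    "couchbases://cb.your-endpoint.cloud.couchbase.com",
    ClusterOptions(PasswordAuthenticator("db_user", "db_password")),
)
cluster.wait_until_ready(timedelta(seconds=10))

rows = cluster.query(
    "SELECT h.name, h.city FROM `travel-sample`.inventory.hotel AS h "
    "WHERE h.country = $country LIMIT 5",
    QueryOptions(named_parameters={"country": "United States"}),
)
for row in rows:
    print(row)
```

The value of a tool like Capella IQ is that it proposes the query, sample data or unit test inside the Capella workbench, so the developer mostly reviews and runs code like this rather than writing it from scratch.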
id: 3120
year: 2023
"Context raises $3.5M to elevate LLM apps with detailed analytics | VentureBeat"
"https://venturebeat.com/ai/context-raises-3-5m-to-elevate-llm-apps-with-detailed-analytics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Context raises $3.5M to elevate LLM apps with detailed analytics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. London-based Context , a startup providing enterprises with detailed analytics to build better large language model (LLM)-powered applications, today said it has raised $3.5 million in funding from Google Ventures, Tomasz Tunguz from Theory Ventures and other sources. Context AI said it will use the capital to grow its engineering teams and build out its platform to better serve customers. The investment comes at a time when global companies are bullish on AI and racing to implement LLMs into their internal workflows and consumer-facing applications. According to estimates from McKinsey, with this pace, generative AI technologies could add up to $4.4 trillion annually to the global economy. Developing LLM apps isn’t easy While LLMs are all the rage, building applications using them isn’t exactly a cakewalk. You have to track a model’s performance, how the application is being used, and most importantly, whether it is providing the right answers to users or not — accurate, unbiased and grounded in reality. Without these insights, the whole effort is just like flying blind with no direction to make the product better. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Henry Scott-Green, who previously worked as a product manager at Google, saw similar challenges earlier this year when working on a side project that tapped LLMs to let users chat with websites. “We talked to many product developers in the AI space and discovered that this lack of user understanding was a shared, critical challenge facing the community,” Green told VentureBeat. “Once we identified and validated the problem, we started working on a prototype [analytics] solution. That was when we decided to build Context.” Offering high-level insights Today, Context is a full-fledged product analytics platform for LLM-powered applications. The offering provides high-level insights detailing how users are engaging with an app and how the product is performing in return. 
This not only covers basic metrics like the volume of conversations on the application, top subjects being discussed, commonly used languages and user satisfaction ratings, but also more specific tasks such as tracking particular topics (including risky ones) and transcribing entire conversations to help teams see how the application is responding in different scenarios.

“We ingest message transcripts from our customers via API, and we have SDKs and a LangChain plugin that make this process [take] less than 30 minutes of work,” Green explained. “We then run machine learning workflows over the ingested transcripts to understand the end user needs and the product performance. Specifically, this means assigning topics to the ingested conversations, automatically grouping them with similar conversations, and reporting the satisfaction of users with conversations about each topic.”

Ultimately, using the insights from the platform, teams can flag problem areas in their LLM products and work toward addressing them, delivering an improved offering that meets user needs.

Plan to scale up

Context claims to have garnered multiple paying customers since its founding four months ago, including Cognosys, Juicebox and ChartGPT, as well as several large enterprises. Citing non-disclosure agreements, Green did not share further details. With this round, the company plans to build on its effort by hiring a technical founding team, which will allow Green and his team to accelerate their development and build an even better product.

“The product itself has a few planned focus areas: to build higher-quality ML systems that deliver deeper insights; to improve the user experience; and to develop alternate deployment models, where our customers can deploy our software directly in their cloud,” the CEO said. “At this stage, our goal is to continue growing our customer base while delivering value to the businesses using our product. And we’re seeing success,” he added.

Growing competition

As the demand for LLM-based applications grows, the number of solutions for tracking their performance is also expected to rise. Observability player Arize has already launched a solution called Phoenix, which visualizes complex LLM decision-making and flags when and where models fail, give poor responses or incorrectly generalize. Datadog is going in the same direction and has started providing model monitoring capabilities that can analyze the behavior of a model and detect instances of hallucinations and drift based on data characteristics such as prompt and response lengths, API latencies and token counts. Green, however, emphasized that Context provides more insights than these offerings, which just flag the problem areas, and is more like web product analytics companies such as Amplitude and Mixpanel.

The funding round also saw participation from 20SALES and multiple VCs and tech industry luminaries, including 20VC’s Harry Stebbings, Snyk founder Guy Podjarny, Synthesia founders Victor Riparbelli and Steffen Tjerrild, Google DeepMind’s Mehdi Ghissassi, Nested founder Matt Robinson, Deepset founder Milos Rusic and Sean Mullaney from Algolia."
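Context's ingestion API is not documented in the article, so the sketch below is purely hypothetical: the endpoint URL, header and payload fields are invented placeholders meant only to illustrate the flow Green describes, where transcripts are sent via API and server-side ML workflows then assign topics, group conversations and score satisfaction.

```python
# Hypothetical illustration only: endpoint, auth header and payload shape are
# invented placeholders, not Context's documented API.
import requests

payload = {
    "conversation_id": "conv_123",
    "messages": [
        {"role": "user", "content": "How do I export my billing data?"},
        {"role": "assistant", "content": "Go to Settings > Billing > Export and pick CSV."},
    ],
    "metadata": {"app_version": "1.4.2", "language": "en"},
}

resp = requests.post(
    "https://api.example-llm-analytics.com/v1/conversations",  # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
# Topic assignment, conversation grouping and satisfaction scoring then happen
# server-side, per the workflow described in the article.
```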
id: 3121
year: 2023
"Arize AI wants to improve enterprise LLMs with 'Prompt Playground,' new data analysis tools | VentureBeat"
"https://venturebeat.com/ai/arize-ai-wants-to-improve-enterprise-llms-with-prompt-playground-new-data-analysis-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arize AI wants to improve enterprise LLMs with ‘Prompt Playground,’ new data analysis tools Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. We all know enterprises are racing at varying speeds to analyze and reap the benefits of generative AI — ideally in a smart, secure and cost-effective way. Survey after survey over the last year has shown this to be true. But once an organization identifies a large language model (LLM) or several that it wishes to use, the hard work is far from over. In fact, deploying the LLM in a way that benefits an organization requires understanding the best prompts employees or customers can use to generate helpful results — otherwise it’s pretty much worthless — as well as what data to include in those prompts from the organization or user. “You can’t just take a Twitter demo [of an LLM] and put it into the real world,” Aparna Dhinakaran, cofounder and chief product officer of Arize AI, said in an exclusive video interview with VentureBeat. “It’s actually going to fail. And so how do you know where it fails? And how do you know what to improve? That’s what we focus on.” Introducing ‘Prompt Playground’ Three-year-old business-to-business (B2) machine learning (ML) software provider Arize AI would know, as it has since day one been focused on making AI more observable (less technical and more understandable) to organizations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Today, the VB Transform award-winning company announced at Google’s Cloud Next 23 conference industry-first capabilities for optimizing the performance of LLMs deployed by enterprises, including a new “Prompt Playground” for selecting between and iterating on stored prompts designed for enterprises, and a new retrieval augmented generation (RAG) workflow to help organizations understand what data of theirs would be helpful to include in an LLMs responses. Almost a year ago, Arize debuted its initial platform in the Google Cloud Marketplace. Now it is augmenting its presence there with these powerful new features for its enterprise customers. Prompt Playground and new workflows Arize’s new prompt engineering workflows, including Prompt Playground, enable teams to uncover poorly performing prompt templates, iterate on them in real time and verify improved LLM outputs before deployment. 
Prompt analysis is an important but often overlooked part of troubleshooting an LLM's performance, which can often be boosted simply by testing different prompt templates or iterating on one for better responses. With these new workflows, teams can easily: uncover responses with poor user feedback or evaluation scores; identify the underlying prompt template associated with poor responses; iterate on the existing prompt template to improve coverage of edge cases; and compare responses across prompt templates in the Prompt Playground prior to implementation. As Dhinakaran explained, prompt engineering is absolutely key to staying competitive with LLMs in the market today. The company's new prompt analysis and iteration workflows help teams ensure their prompts cover necessary use cases and potential edge scenarios that may come up with real users. "You've got to make sure that the prompt you're putting into your model is pretty damn good to stay competitive," said Dhinakaran. "What we launched helps teams engineer better prompts for better performance. That's as simple as it is: We help you focus on making sure that that prompt is performant and covers all of these cases that you need it to handle." For example, prompts for an education LLM chatbot need to ensure no inappropriate responses, while customer service prompts should cover potential edge cases and nuances around services offered or not offered. Understanding private data Arize is also providing what it says are the industry's first insights into the private or contextual data that influences LLM outputs — what Dhinakaran called the "secret sauce" companies provide. The company uniquely analyzes embeddings to evaluate the relevance of private data fused into prompts. "What we rolled out is a way for AI teams to now monitor, look at their prompts, make it better and then specifically understand the private data that's now being put into those prompts, because the private data part makes sense," Dhinakaran said. Dhinakaran told VentureBeat that enterprises can deploy Arize's solutions on premises for security reasons, and that they are SOC-2 compliant. The importance of private organizational data These new capabilities enable examination of whether the right context is present in prompts to handle real user queries. Teams can identify areas where they may need to add more content around common questions lacking coverage in the current knowledge base. "No one else out there is really focusing on troubleshooting this private data, which is really like the secret sauce that companies have to influence the prompt," Dhinakaran noted. Arize also launched complementary workflows using search and retrieval to help teams troubleshoot issues stemming from the retrieval component of RAG models. These workflows will empower teams to pinpoint where they may need to add additional context into their knowledge base, identify cases where retrieval failed to surface the most relevant information, and ultimately understand why their LLM may have hallucinated or generated suboptimal responses. Understanding context and relevance — and where they are lacking Dhinakaran gave an example of how Arize looks at query and knowledge base embeddings to uncover irrelevant retrieved documents that may have led to a faulty response. "You can click on, let's say, a user question in our product, and it'll show you all of the relevant documents that it could have pulled, and which one it did finally pull to actually use in the response," Dhinakaran explained.
Then "you can see where the model may have hallucinated or provided suboptimal responses based on deficiencies in the knowledge base." This end-to-end observability and troubleshooting of prompts, private data and retrieval is designed to help teams optimize LLMs responsibly after initial deployment, when models invariably struggle to handle real-world variability. Dhinakaran summarized Arize's focus: "We're not just a day one solution; we help you actually ongoing get it to work." The company aims to provide the monitoring and debugging capabilities organizations are missing, so they can continuously improve their LLMs post-deployment. This allows them to move past theoretical value to real-world impact across industries. "
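The retrieval troubleshooting described in this article boils down to comparing the embedding of a user query against the embeddings of the documents retrieved for it. The generic sketch below illustrates that check with cosine similarity; it is not Arize's product code, and the threshold and embeddings are whatever the team's own pipeline produces.

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_weak_retrievals(query_embedding, retrieved_docs, threshold=0.75):
    """retrieved_docs: list of (doc_id, doc_embedding) pairs for one user query.
    Returns documents whose similarity to the query falls below the threshold,
    i.e. likely-irrelevant context that may explain a hallucinated answer."""
    flagged = []
    for doc_id, doc_embedding in retrieved_docs:
        similarity = cosine_similarity(query_embedding, doc_embedding)
        if similarity < threshold:
            flagged.append((doc_id, round(similarity, 3)))
    return flagged
```

If every retrieved document for a query falls below the threshold, the knowledge base probably lacks coverage for that question, which is exactly the "add more content around common questions" gap the article describes.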
3,122
2,023
"AI21 Labs raises $155M to accelerate generative AI for enterprises | VentureBeat"
"https://venturebeat.com/ai/ai21-labs-raises-155m-to-accelerate-genai-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI21 Labs raises $155M to accelerate generative AI for enterprises Share on Facebook Share on X Share on LinkedIn Credit: Roei Shor Photography/A21 Labs Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tel Aviv, Israel-based large language model (LLM) leader AI21 Labs confirmed with VentureBeat that it has has closed $155 million in series C funding to accelerate the growth of its text-based generative AI services for enterprises. The company is now valued at $1.4 billion. Investors in the round include Walden Catalyst, Pitango, SCB10X, b2venture, Samsung Next and Amnon Shashua with participation from Google and Nvidia. AI21 is often cited as a rival to OpenAI Founded in 2017 by AI pioneers and technology veterans Amnon Shasuha, Yoav Shoham and Ori Goshen, AI21 Labs may have been one of the first to bring gen AI to the masses, but it has also spent the past year chasing LLM rivals like OpenAI to commercial applications. After a $64 million series B round last year, Shoham, an emeritus professor of AI at Stanford University, told VentureBeat that he recognized that the funding landscape was tightening and more LLMs and multimodal models were being launched every day. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! He said the company was “very aware of the environment and not complacent in any way.” AI21’s proprietary Jurassic-2 foundation models are considered some of the world’s largest and most sophisticated LLMs. Jurassic-2 powers AI21 Studio, a developer platform for building custom text-based business applications off of AI21’s language models; and Wordtune, a multilingual reading and writing AI assistant for professionals and consumers. AI21 develops and owns foundation models that serve as platform AI21 chairman Shashua said in a press release: “AI21 Labs is a pure play in AI as it develops and owns foundation models which are served as a platform to developers and enterprises, while developing derivatives such as Wordtune directly to end users. The current round fuels the growth of the company to reach its goal of developing the next level of AI with the capabilities of reasoning across many domains. We believe that the impact of AI21 Labs growth plans would be of a global scale and quite soon.” Jensen Huang, founder and CEO of Nvidia, also shouted out AI21’s work in the press release: “Generative AI is driving a new era of computing across every industry,” he said. 
"The innovative work by the AI21 Labs team will help enterprises accelerate productivity and efficiency with generative AI-based systems that are accurate, trustworthy and reliable." AI21 has recently collaborated with customers in diverse sectors, including Carrefour, Clarivate, eBay, Guesty, Monday.com and Ubisoft. The company was also named to the first-ever CB Insights GenAI 50 list. "
3,123
2,023
"AI to star in the launch of Webflow's built-in app ecosystem | VentureBeat"
"https://venturebeat.com/ai/ai-to-star-in-the-launch-of-webflows-built-in-app-ecosystem"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI to star in the launch of Webflow’s built-in app ecosystem Share on Facebook Share on X Share on LinkedIn Credit: Webflow Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Webflow , the low-code web development platform, has fully opened up its new app ecosystem, establishing a platform where third-party developers can integrate their applications directly into the Webflow designer. Webflow CTO Allan Leinwand shared insights about the vision for this ecosystem in an exclusive interview with VentureBeat, explaining that new APIs will allow apps to have a visible presence directly on the Webflow designer canvas. Additionally, Webflow has enhanced their backend APIs so apps can interact with Webflow forms, content and other data. The initiative has been a year in the making, with the aim to expose more functionality of Webflow designer and data models for developers to build deeply integrated apps. “We have about 200,000 designers and companies that use Webflow and visual design to create these really fully customized professional websites without needing to code,” said Leinwand. “Part of that is we know we can’t write every piece of functionality for everyone else’s designers and companies. So we’ve been working on exposing the surface area of the designer, and of our core data models, for about a year now.” Low-code creation is the future Leinwand believes this combination of front-end and back-end access will be potent, as apps can put functionality right in front of designers while also connecting to Webflow sites. Drawing parallels to successful app marketplaces like Shopify (where he was former CTO), Leinwand envisions Webflow becoming a similar platform for designers. The ecosystem is open to all developers, whether targeting niche markets or larger ones. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Developers realize that no-code development is the future, and they realize that people are moving into a no-code environment,” said Leinwand. “With generative AI, it even allows you to turn that up a notch, to really help generate some of that content and generate some of that design in an amazing way.” Joining big names like Hubspot, Unspalsh and Typeform, one of the first apps to launch with this new ecosystem is Jasper AI , a marketing-focused AI writing assistant. 
Jasper AI will be able to generate relevant and contextual marketing copy, like blog posts or product descriptions, within the Webflow designer, with changes saved directly into the Webflow backend. According to Leinwand, this seamless integration exemplifies how apps can leverage AI and other technologies to enhance the designer experience. For developers looking to contribute to the ecosystem, Leinwand suggests focusing on addressing unmet needs within the design market using both the new frontend and backend APIs. As the Webflow ecosystem is just beginning, developers have the opportunity to get involved early. Leinwand invites developers to bring their ideas and start developing on the platform. AI-powered marketing tools a natural fit in new ecosystem In a call with VentureBeat, Jasper AI President Shane Orlick discussed the company's partnership with Webflow, emphasizing how gen AI is empowering creators. Jasper AI's content creation platform, which businesses can use to generate high-quality marketing and advertising drafts at scale, is now accessible directly within the Webflow platform thanks to its API integration. This enables Webflow users to generate content directly in the app while building websites. "It was really quick to get that vision lock," Orlick said about early conversations about integrating the Jasper AI platform in Webflow. "We didn't have the API. We had just started thinking about this. So once we had the API, it was really easy, because we actually have a solution that would fit nicely in their marketplace." Orlick believes that this partnership delivers more value for Webflow and drives adoption and engagement with its customers. Such partnerships enable the company to reach a new client base, reducing the friction of AI adoption. "We just want to meet the customers where they are and deliver the better experience," said Orlick. "And because we're only the app layer, we're not raising $500 million to blow into training models [so] we're able to just focus on the customer experience piece. That's why Webflow is so exciting." Looking ahead, Orlick sees significant opportunities in serving large enterprise customers through customized AI templates, style guides and collaborative workflows. Yet, self-service and API-partner channels remain vital for driving leads to Jasper's core business. Orlick underscored how gen AI is transitioning from a novelty to an essential productivity tool. Partnerships like Webflow and Jasper's that embed AI directly into creative workflows promise to unlock its full potential for both businesses and individuals. "
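At a high level, an integration like the one described above writes generated copy back into a Webflow site's content through Webflow's REST API. The sketch below is a generic illustration of that pattern, not Jasper's or Webflow's actual integration code; the collection-items endpoint path and the field names are assumptions, so check the current Webflow Data API documentation before relying on them.

```python
import os
import requests

WEBFLOW_TOKEN = os.environ["WEBFLOW_API_TOKEN"]   # app or site token
COLLECTION_ID = "YOUR_BLOG_COLLECTION_ID"         # hypothetical CMS collection

def publish_draft_post(title: str, body_html: str) -> dict:
    """Create a draft CMS item holding AI-generated copy.
    The endpoint path and field names below are assumptions for illustration."""
    url = f"https://api.webflow.com/v2/collections/{COLLECTION_ID}/items"
    payload = {"isDraft": True, "fieldData": {"name": title, "post-body": body_html}}
    headers = {
        "Authorization": f"Bearer {WEBFLOW_TOKEN}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()
```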
3,124
2,023
"SentinelOne unveils cloud security products for Amazon S3, NetApp | VentureBeat"
"https://venturebeat.com/security/sentinelone-unveils-cloud-data-security-products-for-amazon-s3-netapp"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SentinelOne unveils cloud data security products for Amazon S3, NetApp Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SentinelOne , the autonomous cybersecurity company, recently unveiled its cloud data security product line, featuring two products: threat detection for Amazon S3 and threat detection for NetApp. The company said these “high-speed malware detection” solutions are specifically tailored to protect organizations that use Amazon S3 object storage and NetApp file storage from evolving malware threats in their cloud environments and enterprise networks. SentinelOne asserts that the latest offerings further strengthen the company’s Singularity Cloud product family, complementing SentinelOne’s existing cloud workload security product line. This expansion aims to give customers the ability to detect, investigate and proactively mitigate threats across a diverse range of cloud environments, including public, private and hybrid clouds. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Malware on the Move As businesses increasingly adopt cloud-based solutions, they become more susceptible to sophisticated malware attacks. To address this pressing challenge, SentinelOne said it is intensifying its focus on cloud capabilities and embracing a modern approach to protecting cloud storage and workloads from malware. The new threat detection for NetApp and Amazon S3 products can automatically scan every file added to these two storage platforms for file-borne and zero-day malware, detecting and quarantining malicious files in real time. “Adversaries are generating increasingly sophisticated malware attacks using generative AI , and as reported by the cloud providers themselves, cloud storage is an increasingly used delivery channel for delivering them,” Ely Kahn, vice president of product management, cloud security, and AI/ML products at SentinelOne, told VentureBeat. “Our cloud data security products bring AI-powered threat detection to cloud storage, enabling businesses to automatically detect malware hiding within it in a modern way.” According to Kahn, many cloud data protection solutions rely solely on signature-based approaches. In contrast, he said, SentinelOne adopts a hybrid approach, using both signature and non-signature-based methods driven by the companies proprietary AI detection engines. 
Additionally, the platform includes a unique "protect mode," empowering customers to configure automatic quarantine for malicious files and objects, a feature SentinelOne says is lacking in competitors' products that offer only a detect mode. "Our new products can scan new files/objects in milliseconds, and our customers tell us we are three times faster than anything else they have tested," Kahn told VentureBeat. "While many competitors require customer data to be pulled into their cloud environment to be scanned, we ensure customer files/objects never leave their cloud environment, supporting privacy and data sovereignty needs." Leveraging AI to detect cloud threats in real time SentinelOne emphasizes that the cornerstone of its new security products lies in its proprietary Static AI engine. Unlike traditional methods, this AI engine does not rely on signatures for malware detection. The company explained that the AI engine has undergone extensive training on hundreds of millions of malware samples, enabling it to adeptly detect unknown malware, including malware linked to zero-day exploits. The engine also possesses a native understanding of typical attributes found in malware files. Cloud security operations have historically been segregated from a company's overall security operations. Kahn asserts that as companies gain a better understanding of cloud security, they aim to consolidate all threat management, including for both cloud and data-related threats, into a unified process. Kahn said the company collaborated closely with NetApp and Amazon Web Services to ensure seamless integration of SentinelOne's offerings with the storage solutions, resulting in an optimal combination of security and performance for their shared customers. "The reconfigurability capabilities allow customers to decide whether they want threat detection coverage across all their S3 buckets or just certain ones in certain accounts. Customers can also decide if they want some accounts or buckets configured in protect mode and others in detect mode," he explained. "If there are certain buckets with highly sensitive operational workloads, the customer can configure those with detect mode and all others with our protect mode." Kahn asserted that the solutions represent a significant step forward in SentinelOne's mission to help customers prevent tomorrow's attacks today, but that they are just a first step. "AI is going to supercharge the threat landscape, and we will continue to leverage it to deliver additional cloud workload security and cloud data security products that organizations can use to detect and prevent the spread of malware across their cloud environments and enterprise networks, as they emerge with machine speed," Kahn told VentureBeat. "
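The "scan every file added" behavior described above is typically wired up with object-created event notifications. The sketch below is a generic, illustrative AWS Lambda handler for that pattern, not SentinelOne's product code; the classify_bytes() detector and the quarantine bucket name are hypothetical placeholders.

```python
from urllib.parse import unquote_plus
import boto3

s3 = boto3.client("s3")
QUARANTINE_BUCKET = "example-quarantine-bucket"  # hypothetical destination

def classify_bytes(data: bytes) -> bool:
    # Stand-in for a real detection engine; flags the EICAR test-file prefix only.
    return data.startswith(b"X5O!P%@AP")

def handler(event, context):
    # Invoked by an S3 ObjectCreated notification.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        if classify_bytes(body):
            # Quarantine: copy the object out of the source bucket, then delete it.
            s3.copy_object(
                Bucket=QUARANTINE_BUCKET,
                Key=f"{bucket}/{key}",
                CopySource={"Bucket": bucket, "Key": key},
            )
            s3.delete_object(Bucket=bucket, Key=key)
```

A production detector would stream large objects rather than read them fully into memory and would log verdicts for investigation; this sketch only shows the event-driven shape of the workflow.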
3,125
2,023
"IBM study reveals how AI, automation protect enterprises against data breaches | VentureBeat"
"https://venturebeat.com/security/ibm-study-reveals-how-ai-automation-protect-enterprises-against-data-breaches"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM study reveals how AI, automation protect enterprises against data breaches Share on Facebook Share on X Share on LinkedIn Image Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The more integrated AI, automation and threat intelligence are across tech stacks and SecOps teams, the stronger they make an enterprise against breaches. Follow-on benefits include greater cyber-resilience, and spending less on data breaches than enterprises with no AI or automation defenses at all. IBM Security’s 2023 Cost of a Data Breach Report provides compelling evidence that investing in AI, automation and threat intelligence delivers shorter breach lifecycles, lower breach costs and a stronger, more resilient security posture company-wide. The report is based on analysis of 553 actual breaches between March 2022 and March 2023. The findings are good news for CISOs and their teams, many of whom are short-staffed and juggling multiple priorities, balancing support for new business initiatives while protecting virtual workforces. As IBM found, the average total cost of a data breach reached an all-time high of $4.45 million globally, representing a 15% increase over the last three years. There’s the added pressure to identify and contain a breach faster. IBM’s Institute for Business Value study of AI and automation in cybersecurity also finds that enterprises using AI as part of their broader cybersecurity strategy concentrate on gaining a more holistic view of their digital landscapes. Thirty-five percent are applying AI and automation to discover endpoints and improve how they manage assets, a use case they predict will increase by 50% in three years. Endpoints are the perfect use case for applying AI to breaches because of the proliferating number of new identities on every endpoint. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why AI needs to be cybersecurity’s new DNA Scanning public cloud instances for gaps in cloud security (including misconfigurations), inventing new malware and ransomware strains and using generative AI and ChatGPT to fine-tune social engineering and pretexting attacks are just a few of the ways attackers try to evade being detected. 
Cybercrime gangs and sophisticated advanced persistent threat (APT) groups actively recruit AI and machine learning (ML) specialists to design their large language models (LLMs), while also looking for new ways to corrupt model data and invent malware capable of evading the current generation of threat detection and response systems, starting with endpoints. CISOs need AI, ML, automation and threat intelligence tools if they're going to have a chance of staying at competitive parity with attackers. IBM's report provides compelling evidence that AI is delivering results and needs to be the new DNA of cybersecurity. Integrating AI and automation reduced the breach lifecycle by 33%, or 108 days IBM found that enterprises that advanced their integration of AI and automation into SecOps teams to the platform level are reducing breach lifecycles by one-third, or 108 days, to an average of 214 days. The average breach lasts 322 days when an organization isn't using AI or automation to improve detection and response. Extensive use of AI and automation resulted in 33.6% cost savings for the average data breach. Integrating AI and automation across a tech stack to gain visibility, improve detection and achieve real-time response to potential intrusions and breaches pays off. Organizations with no AI or automation in place to identify and act on intrusions and breaches had an average breach cost of $5.36 million. Enterprises with extensive AI and automation integration supporting their SecOps teams, tech stack and cyber-resilience strategies experienced far less expensive breaches: the cost of a breach with extensive AI and automation in place averaged $3.6 million. That's a compelling enough cost savings to build a business case around. Despite the advantages, just 28% of enterprises are extensively integrating AI and automation Given the gains AI and automation deliver, it's surprising that only 28% of enterprises surveyed have extensively adopted these new technologies. IBM's team also found that 33% had limited use across just one or two security operations. That leaves 4 in 10 enterprises relying on current and legacy generation systems that attackers have fine-tuned their tradecraft to evade. In another study, 71% of all intrusions indexed by CrowdStrike Threat Graph were malware-free. Attackers quickly capitalize on any gap or weakness they discover, with privileged access credentials and identities being a primary target, a key research finding from CrowdStrike's Falcon OverWatch Threat Hunting Report. Attackers increasingly use AI to evade detection and are focused on stealing cloud identities, credentials and data, according to the report. This further shows the need for intelligent AI-driven cybersecurity tools. Gartner's 2022 Innovation Insight for Attack Surface Management report predicts that by 2026, 20% of companies (versus 1% in 2022) will have a high level of visibility (95% or more) of all their assets, prioritized by risk and control coverage. Gartner contends that cyber asset attack surface management (CAASM) is necessary to bring an integrated, more unified view of cyber assets to SecOps and IT teams; CAASM stresses the need for integration at scale with secured APIs. IBM's study shows that SecOps teams are still losing the AI war. The majority of SecOps teams are still relying on manual processes and have yet to adopt automation or AI significantly, according to the report.
There is a major disconnect between executives' intentions for adopting AI to improve cybersecurity and what's actually happening. Ninety-three percent of IT executives say they are already using or considering implementing AI and ML to strengthen their cybersecurity tech stacks, while just 28% have adopted these technologies. Meanwhile, attackers are successfully recruiting AI, ML and generative AI experts who can overwhelm an attack surface at machine speed and scale, launching everything from DDoS attacks to living-off-the-land (LOTL) techniques that rely on PowerShell, PsExec, Windows Management Instrumentation (WMI) and other common tools to avoid detection while launching attacks. "While extortion has mostly been associated with ransomware, campaigns have included a variety of other methods to apply pressure on their targets," writes Chris Caridi, cyber threat analyst for IBM Security Threat Intelligence. "And these include DDoS attacks, encrypting data, and more recently, some double and triple extortion threats, combining several of the previously seen elements." This should also be considered alongside the proliferation of deepfakes. Zscaler CEO Jay Chaudhry was the recent target of a deepfake attack. Chaudhry told the audience at Zenith Live 2023 about one recent incident in which an attacker used a deepfake of his voice to extort funds from the company's India-based operations. In a recent interview, Chaudhry said, "This was an example of where they [the attackers] actually simulated my voice, my sound … more and more impersonation of sound is happening, but you will [also] see more and more impersonation of looks and feels." Deepfakes have become so commonplace that the Department of Homeland Security has issued the guide Increasing Threats of Deepfake Identities. AI discovers anomalies at scale and machine-level speeds AI and automation deliver measurable results in improving security personalization while enforcing least-privileged access. SecOps teams with an integrated AI and automation tech stack are faster at identifying and taking action on anomalies that could indicate an intrusion or breach. AI and ML excel at analyzing massive volumes of system and user activity data that power threat intelligence systems. IBM found that when a threat intelligence system has real-time data analyzed by AI and ML algorithms, the time to identify a breach is reduced by 28 days on average. Breaches cost less if SecOps teams find them first AI also pays off by helping SecOps teams identify the breach themselves versus waiting for an attacker to announce the breach or having law enforcement inform them. When SecOps teams identify the breach themselves, they save nearly $1 million. The study also compared mean-time-to-identify (MTTI) and mean-time-to-contain (MTTC), finding that extensive integration of AI and automation reduced both. Keep AI, automation and threat intelligence in the context of zero trust Zero trust assumes a breach has already happened, and every threat surface needs to be continually monitored and secured. As the IBM study shows, AI, ML and automation are proving effective in providing real-time threat intelligence. During a recent interview with VentureBeat, zero trust creator John Kindervag advised that "you start with a protect surface. I have, and if you haven't seen it, it's called the zero-trust learning curve. You don't start with technology, and that's the misunderstanding of this. Of course, the vendors want to sell the technology, so [they say] you need to start with our technology.
None of that is true. You start with a protect surface, and then you figure out [the technology]." Kindervag's advice is well taken and reflects how AI, ML, automation and threat intelligence can be deployed effectively and deliver results at scale. Kept in a zero trust context of protecting one threat surface at a time, as Kindervag advises, these technologies deliver value. "
3,126
2,023
"How CISOs can engage the C-suite and Board to manage and address cyber risk | VentureBeat"
"https://venturebeat.com/security/how-cisos-can-engage-the-c-suite-and-board-to-manage-and-address-cyber-risk"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How CISOs can engage the C-suite and Board to manage and address cyber risk Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The modern Chief Information Security Officer (CISO) has a difficult job. Amidst the myriad of malicious cyber threats attempting to infiltrate their organization, CISOs must also effectively navigate other murky waters: Engaging their C-suite and governing counterparts on matters of cybersecurity. It’s a tall task for which decades of technical training and programmatic cyber expertise alone are insufficient preparation. The Securities and Exchange Commission (SEC) finalized new cybersecurity regulations on July 26 that require public companies to disclose cybersecurity breaches within four days, as well as raise their Board’s level of cyber expertise and oversight of managing and assessing cyber risk. The agency proposed these regulations in 2022 and the final decision is expected to come in October 2023. Now more than ever, CISOs should be well-positioned to inform and engage fellow leaders as organizations invest in digital transformation at scale. Seeking out the latest and greatest technologies The hyper-competitive landscape of our digitalized enterprise world drives organizational leaders on a continuous search for the latest and greatest innovative technologies that can elevate them above the pack. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These technologies have evolved exponentially over the various eras of computing. It started with the centralized mainframe, then transitioned to microcomputers and PCs in the 1990s. Then came the internet era, the subsequent mobile device revolution of the 2000s and expansion into the cloud throughout the 2010s. We’ve now entered another transformative era: The current arms race of generative AI and machine learning (ML) that, albeit exciting, has ushered in a wide range of new operational risks for CISOs to manage. Knowing when to say yes The march to streamline business-critical functions, alleviate bottlenecks, and improve operational efficiency makes digital transformation a top priority for every organization. When revenue and customer satisfaction are on the line, adopting new technologies and understanding the cyber risk associated with them is imperative. For CISOs to be true business partners, it’s not feasible to say “no” to every new opportunity. 
Knowing how and when to say "yes" without jeopardizing the organization's security posture can be tricky. This heightens the importance of understanding how to explain cyber risk to the C-suite and Board simply, in a manner that fosters a collective understanding of its criticality. The role of the CISO is no longer to be a tactical facilitator or pure technologist. It's about being a transformative leader who closes the gap between the organization's cybersecurity and business operations to help drive market adoption and sustained success. Engaging the C-suite: Aligning cyber risk to business goals Effectively engaging the C-suite starts with simplifying the connection between cyber risk and business risk. This requires translating the impact of a cyberattack in a way that doesn't portray a doomsday narrative, but still clearly outlines the severe ramifications it could have for fundamental business goals. For a conversation with the CFO, that link could be financial losses associated with operational downtime caused by a ransomware event. For the CMO, it could be brand reputational damage after customer personally identifiable information (PII) was leaked. For the COO, it could be a business disruption following a supply chain breach. The true name of the game is conveying the implications of inaction, tying them back to outcomes that carry the most meaning in the eyes of the respective leaders. Because let's face it, conversations around the intricacies of extended detection and response (XDR) solutions, exfiltration and distributed denial of service (DDoS) attacks are never going to fully resonate with a non-technical audience. And, by extension, they can also come across as belittling without the CISO actually realizing it, further exacerbating the perceived complexity of the cyber threat landscape. Engaging the board: Building trust and confidence As the nature of cyber threats continues to evolve, so too does the regulatory landscape around overarching cyber risk. With the new SEC regulations in play, boardrooms are finally beginning to embrace the urgency of cyber resilience in a digital age — making heightened commitments to equipping organizations with the right resources to proactively safeguard data and defend themselves. The ripple effect of this paradigm shift is that security leaders are now getting tapped by their Boards for insight and counsel more than ever before. A CAP Group study earlier this year found that 90% of companies in the Russell 3000 index lacked a single director with the necessary cyber expertise. In turn, CISOs are being called upon to establish and maintain an open line of communication across the boardroom. Quick and continuous updates Considering that stringent compliance requirements will soon be in play, the Board needs quick and continuous updates on the cyber threat landscape. Effective engagement in this context requires a firm understanding of the ultimate end goal. It's not so much a matter of asking the main governing body of the organization for cyber budgeting or approvals. That's usually for the C-suite to decide. Rather, it's a petition to trust that the organization is well-positioned to steward itself through a cyber crisis and mitigate its fallout in compliance with corresponding regulations. Time is of the essence in boardroom settings — CISOs often only have 15 to 30 minutes to make their case.
So, do away with the extensive PowerPoint decks and lengthy presentations and instead leverage impactful storytelling techniques and logical real-world examples that draw emotion. It's not just about vocalizing cyber risk. It's about making board members feel the impact of it. Frank Kim is a SANS Institute Fellow and CISO-in-Residence at YL Ventures. "
3,127
2,023
"Switchboard announces new $7M funding for data product platform | VentureBeat"
"https://venturebeat.com/enterprise-analytics/switchboard-led-by-google-bigquery-veterans-announces-7m-series-a-funding-for-data-product-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Switchboard, led by Google BigQuery veterans, announces $7M series A funding for data product platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Switchboard , a company focused on data product development for business users, is announcing today a $7 million series A round of funding led by GFT Ventures and Quest Venture Partners. Switchboard, founded by Google BigQuery launch team members Ju-kay Kwek and Michael Manoochehri, offers a platform designed for the creation and operation of data products that does not require the talents of specialized data teams. The company’s goal is to level the playing field so organizations beyond the tech giants can own and wield their data as competitive assets. Switchboard cites such companies as DotDashMeredith , OrangeTheory Fitness , and the Financial Times as using its platform to do just that. Data products, a concept popularized by the “data mesh” approach to business domain-oriented data and analytics, are applications, APIs, data feeds or simple datasets that are owned, supported and internally marketed by enterprise business units, and made available to other such groups. Developing data products has, for the most part, required the work of data engineers and analytics specialists, a fact that has stymied the decentralized approach to data-driven business that the data mesh paradigm seeks to enable. It’s that problem space that Switchboard seeks to address. Essential reading: How data mesh is turning the tide on getting real business value from data Data mesh: What it is and why you should care Data fabric versus data mesh: What’s the difference? Democratizing data product development Switchboard’s unique value proposition is making data product development accessible to the business users who typically possess the contextual knowledge of the data in the first place. Additionally, Switchboard is a SaaS platform that avoids the need to provision and manage dedicated infrastructure, and it’s focused on production hosting and management of the data products, in addition to their development and testing. That enables business users to handle the entire lifecycle of data products, allowing highly specialized data engineering teams to focus on more strategic, enterprise-wide projects and efforts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Switchboard is not just a tech platform — it's focused on specific business use cases, with a template- and automation-based approach to implementing them. To begin with, the company is concentrating on digital media revenue and marketing scenarios and offers detailed templates for digital media campaign analytics. Switchboard plans to move on to other domains as the platform establishes itself. Building blocks The gist of Switchboard's approach is to encapsulate industry best practices, and to bundle in SOC-compliant data management; proactive insights through such assets as out-of-the-box dashboards; and support for standard data sources paired with a high degree of customizability. Switchboard facilitates data transformation, blending, cleansing, verification and normalization, and can publish data products to cloud data warehouse platforms including Snowflake, Amazon Redshift and, of course, the Google BigQuery platform that the company's founders helped launch. The Switchboard platform allows business users to encode their requirements and proprietary business rules as low-code policies; stores all necessary data source credentials in an encrypted key store; and facilitates authoring and re-use of data "recipes" (shown in the image at the top of the original post) for acquiring the required data. Switchboard also provides consolidated management of all relevant data sources and partner systems, including monitoring operational activity and providing centralized exception reporting. Switchboard says it will use the new funding to invest in the platform and enable business users to do more with features enabled by machine learning. A next phase is to take the data products the company has worked on with some of its more advanced customers, and turn them into new templated data products included in the system, joining the digital media RevOps, campaign analytics and programmatic reporting solutions that are already there. Power to the business The notion of putting data product development and even production DataOps into the hands of business users is a break with entrenched precedents. But even low-code specification of business rules, and understanding which data feeds to use and how to connect with them, is not for the faint of heart. Cofounder Kwek told me Switchboard typically dispatches a small team of "success engineers" (who are essentially data scientists) to its new customers, who typically take four to eight weeks to get up and running. But even if some professional services and power-user skills are needed, leveraging automation and reusable business process specs to enable self-service data product development is both shrewd and laudable. If the approach works, it's something other analytics platform vendors would do well to adopt.
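To make the "recipe" idea more concrete, here is a purely hypothetical example of what a declarative data-product recipe of this general kind might look like. It is not Switchboard's actual format; every field name and value below is invented for illustration.

```python
# Hypothetical, invented illustration of a declarative data-product "recipe".
# This is NOT Switchboard's format; all field names are made up.
campaign_revenue_recipe = {
    "name": "digital_media_campaign_revenue",
    "sources": [
        {"type": "ad_platform_api", "credentials": "keystore://ads_prod"},
        {"type": "csv_sftp",
         "path": "sftp://partner.example.com/revenue/*.csv",
         "credentials": "keystore://partner_sftp"},
    ],
    "transforms": [
        {"op": "normalize_currency", "to": "USD"},
        {"op": "dedupe", "keys": ["campaign_id", "date"]},
        {"op": "validate", "rule": "revenue >= 0"},
    ],
    "destination": {"warehouse": "bigquery", "dataset": "revops",
                    "table": "campaign_revenue"},
    "schedule": "daily 06:00 UTC",
}
```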
"
3,128
2,023
"Generative AI and the path to predictive analytics | VentureBeat"
"https://venturebeat.com/enterprise-analytics/path-to-predictive-analytics-generative-ai-paving-way-immersive-data-insights"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Predictive analytics: How generative AI is paving the way for immersive insights Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It appears all but certain that generative AI , or one of its leading products, such as ChatGPT, will become the technological buzzword of the year for 2023. The rapid development and rollout of these advanced artificial intelligence programs have been both astonishing and worrisome for those fearing the dangers of growth that outpaces regulation. While it’s impossible to predict where generative AI will lead us, it already appears to be driving significant change in the realm of analytics. At an enterprise level, generative AI possesses the potential to counter significant bottlenecks in what organizations and teams alike can accomplish, even when facing stringent deadlines. Artificial intelligence is also, theoretically at least, free of the biases and cognitive difficulties that humans can experience in forming and testing ideas at scale. This notion, however, has been contested due to human bias that could influence the datasets that AI uses. Away from this, there’s little contesting the time- and resource-saving qualities of generative AI and the insights that it’s capable of producing. While a major drawback of big data is that humans simply cannot interpret thousands of pages of information at a rapid pace, AI can not only ingest it in an instant but interpret key points and metrics to deliver immersive data insights for users to consume. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Generative AI’s potential is such that Goldman Sachs estimates that the technology could deliver a 7% boost to global GDP over the course of the next ten years while also lifting productivity growth by 1.5 percentage points. >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << For business leaders, generative AI and predictive analytics are set to become a partnership that’s impossible to ignore. With many firms already actively undergoing digital transformation, the incorporation of artificial intelligence represents a major step towards keeping heads and shoulders above the mire of a hyper-competitive landscape. 
The path to predictive analytics For businesses seeking to optimize their inventory throughout the year, generative AI is an essential component in powering projections concerning vital customer data. This helps them budget stock better and work more efficiently with supply chains. As the technology matures, businesses will be able to use it to analyze large datasets and spot trends they can use to predict future customer demand or changing consumer preferences. One of the strongest examples of generative AI leveraging predictive analytics today can be found in the events industry. Software firms like Grip and Superlinked have created services that use predictive AI to help event organizers make data-driven decisions about the different aspects of events. Here, these firms have used generative AI in analyzing attendee data from past events to gain insights for future events. We can liken this process to Google Trends, which can use search data to show when certain terms are being queried more frequently. Generative AI models can take similar indicators of audience sentiment, like which individual areas of events have drawn larger crowds and which individual speakers or performers have generated the most interest online, and consider vast arrays of big data to draw concrete conclusions. With the arrival of predictive analytics, businesses will have the power to look beyond sentiment and to consider metadata surrounding specific conversions, popular locations, advanced weather forecasts, variations in social media sentiment, and possible confounding external factors to deliver a comprehensive analysis of exactly what, when and where demand is likely to emerge. We've already seen firms like JetBlue, a U.S. airline, partnering with ASAPP, a technology vendor, in implementing an AI-based customer service solution that can save an average of 280 seconds per chat, paving the way for saving 73,000 hours of agents' time per quarter. This platform will one day be capable of learning from customer sentiment and the recurrence of queries to make actionable recommendations to decision-makers regarding processes and the acquisition of stock. Predictive analytics: The next generation of data analytics Having the ability to analyze vast quantities of big data isn't "generative" by definition, but the generative part comes into play when AI models like ChatGPT use data to create software code that can build deep analytic models. According to GitHub data, 88% of surveyed respondents believe that they're more productive using GitHub Copilot, a code-completion tool built on OpenAI's Codex. Furthermore, 96% of respondents believe that the process makes them "faster with repetitive tasks." This will invariably be an invaluable tool for business leaders to generate far more focused data analytics through automated coding. For instance, AI programs have the ability to deliver "automated decision support," which makes recommendations based on masses of big data. In the future, programs will monitor output, identify areas of employee skill sets that may require improvement, and autonomously develop bespoke training programs designed to strengthen those areas based on the learning styles to which each employee is most receptive.
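As a deliberately simple illustration of the demand-prediction idea discussed in this section, the sketch below fits a linear trend to historical monthly demand and projects the next quarter. It is a generic statistics example with made-up numbers, not any vendor's product, and real forecasting would also account for seasonality and external signals.

```python
import numpy as np

# Made-up monthly unit demand, for illustration only.
demand = np.array([120, 132, 128, 141, 150, 149, 158, 165, 171, 178, 186, 195])
months = np.arange(len(demand))

# Fit a simple linear trend: demand ~ slope * month + intercept.
slope, intercept = np.polyfit(months, demand, deg=1)

# Project the next three months.
future_months = np.arange(len(demand), len(demand) + 3)
forecast = slope * future_months + intercept
print([round(f) for f in forecast])  # approx. [198, 205, 212] for these numbers
```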
Programs could also work in tandem with other sprawling analytical platforms, such as Google Analytics (GA) or Finteza, and use their insights to make automatic tweaks and improvements to company websites based on traffic and performance insights, as well as forecast future traffic. In addition to this, if a generative AI program learns from GA’s or Finteza’s analytical data that visitor figures have fallen at a time when social media sentiment and seasonal trends indicate that increased engagement should occur, the program could study the issue and make corrections accordingly, while notifying relevant parties or web developers of any changes for subsequent review. ChatGPT, for instance, is currently being used a lot for content creation. However, it does come with limitations. For example, below is an example of content generated by ChatGPT. The first article is titled, “4 Ways To Recycle Your Glasses,” the second, “How To Recycle Your Glasses.” While both pieces have very similar headlines, the approach to writing the article and the points discussed should vary quite a lot (in real life, at least). Yet, in the case of ChatGPT, both articles are very similar — identical in some instances: As you can see, some content is pretty much identical. Hence, once more than one person opts to use ChatGPT for a similar headline, the issue of duplicate content will arise pretty much immediately. This is expected simply because no generative AI can live the lives of thousands of people and experience all of the possible scenarios based on very different life events, situations, personal experiences, characters and habits that human beings possess. All of these factors affect how people write content, the language they use, their writing style and the examples they use. Based on this, we can expect to see businesses take on a far more assistive role in realizing the potential of a data-driven future for businesses. Instead of using platforms like ChatGPT to work on our behalf, these programs can support our business decisions — even if those decisions stem from the example above, whereby generative AI can offer comprehensive discussion points to support content plans. Prioritizing privacy Although the regulatory framework surrounding the growth of generative AI and predictive analytics is still subject to development, early signs suggest that the technology can bring key innovations in the age of GDPR. This is because generative AI has the ability to anonymize sensitive data before it’s viewed by human eyes. This empowers predictive analytical tools to generate synthetic data that mimics real datasets without containing any identifiable information. >>Follow VentureBeat’s ongoing generative AI coverage<< Likewise, the software could automatically add and remove identifiable parameters within data, which could help in industries like pharmaceuticals, where drug trials operate on a blind and double-blind basis. This represents another major opportunity for businesses seeking to tap into generative AI. Through the creation of privacy-oriented algorithms that protect sensitive information while empowering organizations to analyze the available insights, more firms can act decisively in improving the customer experience. The greatest business opportunity of the 21st century? 
While there’s certainly plenty of work still to be done in terms of creating a regulatory framework to ensure that generative AI grows in a sustainable manner, the potential utility of the technology in the field of predictive analytics is certainly a cause for optimism. Because of generative AI’s ability to act decisively in using big data to offer actionable insights, it’s imperative that businesses move to access this potential before they lose ground in the battle for breathing room among companies undergoing digital transformation. As well as a significant time-saving tool, generative AI-powered predictive analytics can help organizations gain more immersive insights into performance, which can lead to vast operational improvements. Although the technology may need more time to mature in the short term, its future utility can bring significant cost and productivity benefits throughout virtually every industry. Dmytro Spilka is the head wizard at Solvid. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,129
2,023
"Cisco AppDynamics: 85% of tech experts say application observability is a strategic priority for managing cloud complexity | VentureBeat"
"https://venturebeat.com/data-infrastructure/cisco-appdynamics-85-of-tech-experts-say-application-observability-is-a-strategic-priority-for-managing-cloud-complexity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Cisco AppDynamics: 85% of tech experts say application observability is a strategic priority for managing cloud complexity Share on Facebook Share on X Share on LinkedIn Presented by Cisco AppDynamics Cloud native technologies might have improved speed to innovation, and offer greater agility, reliability and scalability — but these modern application architectures are posing serious challenges for IT departments across industries, according to recent research by Cisco AppDynamics for its latest report, “ The Age of Application Observability. ” Technologists say that 49% of their new innovation initiatives are being delivered with cloud native technologies. Cloud adoption promises to be aggressive across industries, with IT leaders expecting this figure to climb to 58% over the next five years. That means that the majority of new digital transformation programs will be built on cloud native technologies by 2028. “These cloud native technologies are enabling IT teams to scale, to be able to take on more customers, to grow faster, and still maintain an optimized experience for the end user,” says Joe Byrne, executive CTO at Cisco AppDynamics. “But the rapid pace of adoption means technologists have been faced with challenges from both the tech front and the people front, and they’re struggling.” A look at the challenge landscape Attack surfaces are expanding, complexity is skyrocketing, and data keeps pouring in. Seventy-eight percent of technologists said the increased volume of data from multi-cloud and hybrid environments has made manual monitoring virtually impossible. The rigorous pace of adoption and the technical issues that follow in its wake has also meant tension in the IT department as silos form, and stress and higher churn is becoming increasingly common. “The goal of the application observability report was to underscore the need for technologists to adapt to this new hybrid world,” Byrne says. “And the report was designed to provide a resource to these technologists first to let them know they’re not alone – these are common issues. But more importantly, to find solutions and next steps in managing and mitigating these issues going forward.” Managing the fragmented IT state The cloud and on-premises, hybrid nature of modern architectures means that traversing that entire ecosystem is crucial. As a result, new teams are formed to help manage the complexity – a cloud operations team to work with the network operations team, and both operations teams working separately from security. 
But to effectively manage what is essentially a fragmented IT estate, 85% of technologists say that observability has to be a strategic priority for the organization going forward – a way to pull this telemetry together, correlate it, and give organizations insight into the crucial backend of their business. Bridging that gap takes not only tools and technology, but people and process changes and a cultural shift. “Everybody needs to be on the same page when producing an application or an experience for the end user, who expects an optimized experience, whether that’s B2C or B2B,” Byrne says. “There’s a business KPI behind all software. Our goal is to ensure the software helps our customers achieve their goals, and thus helps the business achieve its KPIs. So, everyone must own a piece.” Why these challenges are so intractable “It’s new technology and new expectations bringing new problems,” Byrne says. “The old methodology of just validating that something is up and running isn’t good enough anymore. The idea of looking at the architecture as separate, isolated parts is not enough anymore. Now, it’s all about how is it all performing together, and what does the end result look like in terms of the experience? It’s a very different way of thinking. And it’s hard to get your head around it.” It’s also the fact that the technology is moving at a rapid pace, as are the expectations of users, but processes and culture have always changed far more slowly. As a result, 36% of technologists said these issues are already contributing to a loss of their IT talent, which hamstrings teams and puts change on the backburner, in favor of firefighting – and 46% predict that churn is just going to increase if they don’t figure out a way to break down these silos and shift to a focus on observability, versus a monitoring solution. Breaking down silos and obstacles The goal of most organizations is to build an application that’s always on, can be used on any device, whenever and wherever the customer wants to use it — but that’s what is creating these challenges for technologists. It requires new technology, it requires rapid adoption and acceleration of digital initiatives, and it leaves skill gaps, a Frankenstein management and reporting structure made up of the old and new, a lack of shared vision and objectives, and a lack of unified data and technology that’s reinforcing these silos. “IT leaders need to implement new ways of working across departments, and incentivizing and driving people to change their actions is an important one,” Byrne says, “whether it’s shared goals, shared bonuses, or increased compensation. But tool consolidation is also crucial.” Bringing in unified tools that are integrated tightly and able to work together, versus every team using a completely different tool, can not only save the organization money, but also means each team is looking at the same charts and data points, speaking the same language, using the same methodologies. “Then they start to understand how important it is to work together, how easy it can be,” he explains. “Then those silos start to get broken down.” The people-centered value of application observability Application observability serves as what should be a single source of truth. It brings together application information, network, infrastructure, performance, security and business data — and links that all together to give technologists the overall health of the application, and the ability to generate insights into the business transactions of users. 
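Concretely, "business context" usually means attaching business attributes to the telemetry an application already emits, so the same span that measures latency can also feed a revenue dashboard. The sketch below is a generic illustration using the open-source OpenTelemetry Python SDK — not Cisco's FSO instrumentation — and the service and attribute names are hypothetical.

```python
# pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def checkout(user_id: str, cart_total_usd: float) -> None:
    # One span per business transaction: the record that carries latency and
    # errors also carries the business metric a revenue dashboard would use.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("user.id", user_id)
        span.set_attribute("order.value_usd", cart_total_usd)
        # ...payment, inventory and shipping calls would happen here...

checkout("user-42", 129.99)
```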
For instance, in a retail application that might be a user logging in, searching, adding to cart, checking out, which together makes up the business journey. “Understanding how those are related, what technologies are involved for each of those transactions that complete that journey, is important,” Byrne says. “We found that 88% of technologists say that observability with business context is really what’s going to enable them to become more strategic and spend more time on innovation.” For example, the business data that comes from monitoring applications can be aggregated and elevated, so that you can build a dashboard showing the average sales per day, the average number of customers, conversions and other business metrics. With that data, technologists see how their optimization directly impacts the business. That could include a change in the code that auto-populates some data, or enables the task to use less data to minimize friction for purchases. And now instead of being seen this application observability as a cost center, the value of the work IT is doing is tied directly to the business. “If you release code and then see that happen in a business dashboard, the technologists can say, my code, my application, my infrastructure did that, and now they understand how they directly impact business,” Byrne says. “With the ability to link what they’re doing, how they’re doing it, the performance of their teams along with code and architectures, to a business metric, comes pride of ownership. They feel like they have a seat at the table now, a bigger voice, and can help advance the business. That’s a huge opportunity.” Implementing an application observability solution also means that engineers are spending more time writing code – what they want to be doing – and less time bug fixing or refactoring. Team members get to the root causes more quickly, are able to measure performance more easily, before code ever goes into production, which means fewer errors are sent out into the wild. “What these technologists need is that solution, like Cisco’s full-stack observability (FSO) offering – that brings a broad range of telemetry together and making it understandable and usable in terms of fixing issues and moving forward,” he says. “That’s what’s so needed.” Dig deeper: Read the full “The Age of Application Observability Report” here. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,130
2,023
"Translated Debuts Trust Attention for Unprecedented Quality in MT, Paving the Way for Accuracy in Generative AI | VentureBeat"
"https://venturebeat.com/business/translated-debuts-trust-attention-for-unprecedented-quality-in-mt-paving-the-way-for-accuracy-in-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Translated Debuts Trust Attention for Unprecedented Quality in MT, Paving the Way for Accuracy in Generative AI Share on Facebook Share on X Share on LinkedIn The latest version of ModernMT (version 7) enhances translation quality by up to 42% using Trust Attention, a novel technique developed by Translated that links the origin of data to its impact on translation accuracy. ROME–(BUSINESS WIRE)–July 27, 2023– Translated, a leading provider of AI-powered language solutions, announced the launch of ModernMT Version 7, a significant upgrade to its adaptive machine translation (MT) system. The latest version introduces Trust Attention , a novel technique inspired by the human brain’s ability to prioritize information from trusted sources, improving translation quality by up to 42% (see attached graph). This innovation sets a new industry standard, moving away from traditional MT systems that are hampered by an inability to distinguish between trustworthy data and lower quality material during the training process. This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230727250636/en/ Percentage of cases where the new model is more accurate than the previous one. Consensus-based evaluation by 3 professional linguists. (Graphic: Business Wire) ModernMT now uses a first-of-its-kind weighting system to prioritize learning from high-quality, qualified data – meaning translations performed and reviewed by professional translators – over unverified content from the Web. As it did when introducing adaptivity, Translated looked to the human brain for inspiration in developing this new technique. Just as humans sift through multiple sources of information to identify the most trustworthy and reliable ones, ModernMT V7 similarly identifies the most valuable training data and prioritizes its learning based on that. “ ModernMT’s ability to prioritize higher quality data to improve the model is the most significant leap forward in machine translation since the introduction of dynamic adaptivity five years ago, ” said Marco Trombetti , CEO of Translated. “ This exciting innovation opens new opportunities for companies to use MT to take their global customer experience to the next level. It will also help translators increase productivity and revenue. “ The introduction of this new approach is a major step forward for companies seeking greater accuracy when translating large volumes of content or requiring a high degree of customization of the MT engine, as well as for translators integrating MT into their workflow. 
Today, there’s considerable discussion regarding the application of large language models (LLM) in translation. While traditional machine translation prioritizes accuracy over fluency, LLMs tend to emphasize fluency. This can sometimes result in misleading outputs due to hallucinations, where outputs aren’t grounded in the input received from training data. We believe that Translated’s Trust Attention can enhance the accuracy of generative models, reducing the chances of such errors. This could set the stage for the next era of machine translation. All Translated clients will benefit from the improved quality of the new MT model, resulting in faster project turnaround times. Translators working with Translated will experience the power of the new model through Matecat , Translated’s free, web-based, AI-powered CAT tool. Translators using an officially supported CAT tool (Matecat, memoQ, and Trados) with an active ModernMT license will also experience the power of the new model. Starting today, ModernMT V7 will replace V6 and is available via API for all 200 languages supported by ModernMT at the same price structure. New customers are invited to try the newest version of ModernMT at modernmt.com. View source version on businesswire.com: https://www.businesswire.com/news/home/20230727250636/en/ Silvio Gulizia Head of Content Mail: [email protected] Mob.: +39 393 1044785 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,131
2,023
"Enry's Island S.p.A. Becomes the First and Only Venture Builder in the World Listed on a Stock Exchange, After a €20M Round A | VentureBeat"
"https://venturebeat.com/business/enrys-island-s-p-a-becomes-the-first-and-only-venture-builder-in-the-world-listed-on-a-stock-exchange-after-a-e20m-round-a"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Enry’s Island S.p.A. Becomes the First and Only Venture Builder in the World Listed on a Stock Exchange, After a €20M Round A Share on Facebook Share on X Share on LinkedIn TREMITI ISLANDS, Italy–(BUSINESS WIRE)–July 27, 2023– Enry’s Island S.p.A. is pleased to announce that it has successfully finalised its listing on the Vienna Stock Exchange – MTF, supported by PwC Austria, becoming the first and only Venture Builder in the world listed on a Stock Exchange. This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230727222549/en/ Luigi Valerio Rinaldi Founder & Chairman of Enry’s Island S.p.A. (Graphic: Business Wire) “We are really proud of this incredible milestone,” says Luigi Valerio Rinaldi, Chairman & CEO of Enry’s Island, “made possible thanks to the trust of internationally qualified operators, such as PwC Austria, supporting us in the Financial Due Diligence and Valuation phase, to successfully finalise the listing process on the Vienna Stock Exchange. “ The listing on the VSE consolidates the internationality of the equity story and the scale-up phase of Enry’s Island, one of the most interesting and innovative ecosystems on the global VC scene, explained by the following highlights: a distributed corporate architecture, which includes Enry’s Island and its 5 Local Companies (distributed in UK, US, Africa, Italy), with an average of 30 Companies (including portfolio startups). 
a unique holistic 3-layered framework, made of: Business Layer : Enry’s Model™ patented methodology, which later became the subject of economics manuals published by McGraw-Hill Software Layer : HUI.land , a Super-App Saas used by each of the companies and stakeholders of the ecosystem, through which to manage every business function and process in the entire dealflow (from origination to fundraising) Space Layer : Rinascimento5 , the first phygital distributed coworking in the world, through which the community of Enry’s Island can operate fully remotely; an incredible growth in the quantity and quality of its economic, financial and equity indicators (increase in turnover of x2 in 2022 compared to 2021, which also continues in the first half of 2023, in which the turnover of 2022 has already been reached) a large trust of qualified international investors, such as LDA Capital, a Los Angeles based fund with $11B in portfolio, which closed a €20M A Round with Enry’s Island; a success rate of its portfolio companies of 95% against the market average of 5%; one of the first operators (not only among venture builders) to build its own headquarters in the metaverse, as early as 2021 and to have organised phygital investor days in the metaverse. View source version on businesswire.com: https://www.businesswire.com/news/home/20230727222549/en/ Investor relations : [email protected] 0039 393 9774542 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,132
2,022
"Committee to Restore Nymox Shareholder Value Challenges Nymox CEO Averback to Publish 2022 FDA Refusal To File Correspondence Received May 10, 2022 | VentureBeat"
"https://venturebeat.com/business/committee-to-restore-nymox-shareholder-value-challenges-nymox-ceo-averback-to-publish-2022-fda-refusal-to-file-correspondence-received-may-10-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Committee to Restore Nymox Shareholder Value Challenges Nymox CEO Averback to Publish 2022 FDA Refusal To File Correspondence Received May 10, 2022 Share on Facebook Share on X Share on LinkedIn CARSON CITY, Nev. & LONDON–(BUSINESS WIRE)–July 27, 2023– The Committee to Restore Nymox Shareholder Value, LLC (CRNSV), with a goal to recover shareholder value in NYMOX PHARMACEUTICAL CORP (“NYMX-Q”), today challenges the Company to publish a rarely issued Refusal to File (RTF) Letter from the US Food and Drug Administration (FDA) in the pursuit of full transparency and in the interests of all shareholders. Investors will then be able to determine how close the Company’s lead product is to being approved in the critical US market for the purpose of treating Benign Prostatic Hyperplasia (BPH), as well as the claimed cancer treatment, as outlined in the recent press releases which recycled old news. CRNSV believes this will highlight the failure of Nymox management to properly present this data to the FDA, which formally denied review of the New Drug Application (NDA) in a rarely issued RTF letter. It will refute recent statements regarding keyman status when the results are read. Chris Riley, one of the founding members of CRNSV, says, “These veiled inferences and questionable tactics, including intimidating threats of lawsuits for CRNSV members as they attend meetings with major shareholders and potential partners who have demonstrated capability and patiently committed to address multiple gross deficiencies created by current management, typify the actions of a NYMOX management team of one individual. The leadership is presently in disarray and unable to function correctly as they again run out of funds.” About The Committee to Restore Nymox Shareholder Value, LLC (CRNSV) CRNSV was formed by former executives of the NYMOX PHARMACEUTICAL CORP (“NYMX-Q”) with a goal to restore shareholder value in NYMOX (the Company). With a commitment to overcome the steep decline and volatility of the stock price following the catastrophic NASDAQ Delisting Decision, CRNSV has issued rebuttal letters to all Company shareholders and continues to emphasize lack of Company leadership, inability to realize the potential for valuable and promising results through a relationship with a highly respected global healthcare and specialty pharmacy solutions company; with expertise to help commercialize the Company’s benign prostatic hyperplasia (BPH) product, and Nymox’s lack of solution or plan for financial recovery of shareholder value. 
Headquartered in Carson City, Nevada with offices in London, CRNSV documents are available at https://www.crnsv.com/ View source version on businesswire.com: https://www.businesswire.com/news/home/20230727625352/en/ Chris Riley [email protected] "
3,133
2,023
"ZeroEyes uses AI and security cameras to detect guns in public and private spaces | VentureBeat"
"https://venturebeat.com/ai/zeroeyes-uses-ai-and-security-cameras-to-detect-guns-in-public-and-private-spaces"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ZeroEyes uses AI and security cameras to detect guns in public and private spaces Share on Facebook Share on X Share on LinkedIn ZeroEyes can detect guns in public spaces before shootings occur. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI has been used a lot for face detection around the world in our surveillance society. But ZeroEyes believes it can detect immediate threats by using AI to detect guns. ZeroEyes has been rolling out the gun detection video analytics service since 2022. It combines the automated detection of gun-like objects with video analysis by human experts before it sends an emergency message to the place where a shooter might be present — before shootings take place. ZeroEyes has proven its worth in cases where shooters brandish weapons — sometimes long before they start shooting and police are alerted, said Sam Alaimo, cofounder of ZeroEyes and a former Navy SEAL, in an interview with VentureBeat. “We are in full blitzscale expansion mode right now,” Alaimo said. “We started at the right time. Our name is known and trusted. We have dozens of clients speaking publicly on our behalf. After operating there for months or years to verifying our technology works the way we say it does. It’s good because every detection like this one means human lives being saved.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gun detection in casinos ZeroEyes has U.S. Department of Homeland Security Safety Act Designation, and it was deployed across nine Muscogee Nation Gaming Enterprises properties, following a successful implementation at River Spirit Casino, the organization’s flagship location, in October 2022. “Keeping our patrons safe is of utmost importance, and the decrease in response time that ZeroEyes provides is an opportunity to save lives,” said Travis Thompson, director of compliance at the River Spirit Casino in Tulsa, Oklahoma, in a statement. “Those seconds are critical, and now we’re well-prepared and better equipped to enhance safety. Working with ZeroEyes was a unique experience, making real-time adjustments during testing, something I haven’t witnessed with any other company. I’m really proud that Indian Country is taking security and safety so seriously, and our proactive security solutions are benefiting everyone in our communities.” The River Spirit Casino spans 200,000 square feet and hosts around 10,000 guests daily. 
Thompson added, “ZeroEyes has elevated our overall security program to new heights, and we feel safer knowing that it has our backs. The solution operates discreetly, ensuring patrons experience a positive atmosphere at our facilities, while providing unparalleled protection against potential threats in the background without disrupting anyone’s experience. ZeroEyes was the perfect fit for us because of its seamless integration– we already had the digital cameras, they added the software, we made the connection and from there it’s been a great partnership that strengthened our security measures.” It has also been adopted by the U.S. Department of Defense, public K-12 school districts, colleges/universities, healthcare facilities, commercial property groups, manufacturing plants, Fortune 500 corporate campuses, shopping malls, big-box retail stores, and more. Internal research ZeroEyes pioneered the field of AI-based visual gun detection after finding through research that, in the majority of mass shootings, the shooter reveals their gun well ahead of the incident. In fact, research suggests that 70% to 80% percent of active shooter events involved a weapon that was visible as much as 30 minutes before the first shot was fired. For example, in the Parkland shooting, the shooter went into the stairwell and sat there for minutes with his gun fully visible, getting mentally prepared. In the 2012 movie theater shooting in Aurora, Colorado, the shooter got suited up in the parking lot beforehand. ZeroEyes has been compiling proprietary data based on an internal study of casino shootings from 2017 to 2023. Based on 100 shootings during that time, it found that 40% of the time the shooter was able to escape. This usually means that security didn’t immediately know the identity of the shooter and where the incident took place. It also means that responding officers didn’t know who they were looking for and this gave the perpetrator time to evade arrest. While a common misconception is that most shootings happen inside businesses and public venues, less than 50% of shootings at casinos happened inside the building. This means that if a gun is detected by CCTV cameras in the parking lot or perimeter of the building, the doors can be locked and business can safely continue inside the casino while security and police work to apprehend the assailant. About 45% of the shootings occur in parking lots. How it works ZeroEyes (ZE) is a proactive visual gun detection and situational awareness software platform based on computer vision and advanced machine learning AI. It is layered on existing digital security cameras at schools, businesses, gaming facilities, healthcare facilities and government offices. These are based on modern security cameras, as opposed to older analog cameras where the images were often blurry. The technology is designed to identify illegally brandished guns and instantly send images to the ZeroEyes Operation Centers (ZOCs), which are staffed by military and law enforcement veterans 24 hours a day for human verification. Once these experts verify that a gun has been identified, they dispatch alerts and provide situational awareness and actionable intelligence, including visual description, gun type and last known location of the shooter, to local staff and law enforcement as fast as three to five seconds from detection. This information is invaluable to first responders, who must act quickly with as much information as possible about a potential active shooter. 
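To picture that pipeline, here is a schematic sketch of a detect-then-verify loop — emphatically not ZeroEyes' proprietary stack. It assumes the open-source ultralytics YOLO and OpenCV packages, a hypothetical custom-trained weights file, and a plain in-memory queue standing in for the human-staffed operations center.

```python
import queue
import time

import cv2
from ultralytics import YOLO

model = YOLO("gun_detector.pt")   # hypothetical weights fine-tuned on firearm imagery
review_queue = queue.Queue()      # stands in for the human-staffed operations center
CONFIDENCE = 0.60                 # illustrative threshold

def watch_camera(stream_url: str, camera_id: str) -> None:
    """Run detection on one camera stream and flag candidate frames for review."""
    cap = cv2.VideoCapture(stream_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for result in model(frame, verbose=False):
            for box in result.boxes:
                if float(box.conf) >= CONFIDENCE:
                    review_queue.put({
                        "camera": camera_id,
                        "timestamp": time.time(),
                        "bbox": box.xyxy.tolist(),
                        "confidence": float(box.conf),
                        "frame": frame,
                    })
    cap.release()

def send_alert(detection: dict) -> None:
    """Stand-in for the real dispatch path (SMS, mobile app, on-site security)."""
    print(f"ALERT: possible firearm on camera {detection['camera']} "
          f"(confidence {detection['confidence']:.2f})")

def dispatch_if_confirmed(detection: dict, confirmed_by_human: bool) -> None:
    # Only a human-verified detection triggers the emergency path; lower-priority
    # cases (e.g., a toy gun) could instead go out as an email-level notice.
    if confirmed_by_human:
        send_alert(detection)
```

The design choice mirrored here is that the model never alerts on its own; a flagged frame only becomes an emergency dispatch after a human confirms it.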
A common issue ZeroEyes sees today when active shooter incidents occur is that first responders lack the situational awareness necessary to locate the shooter, contain the threat, and prevent further loss of life. As for the tech, the operations center uses off-the-shelf PCs with new central processing units and graphics processing units — similar to gaming rigs. It can be on-premises or cloud-based. In most active shooter events, there are over one hundred 911 calls being made with contradictory reports, creating significant confusion, a “fog of war,” and making it impossible to gauge the true nature of the threat. In training scenarios, ZeroEyes has been able to cut response time by nearly two thirds. The company’s goal is to dramatically reduce response time and save lives. ZeroEyes built its technology stack entirely in-house, in the US. Its proactive AI-based visual gun detection and situational awareness platform was developed using hundreds of thousands of proprietary images and videos, and layers advanced machine learning over existing digital security cameras. The way to describe its AI detection is that if a human was looking at a security camera and could detect a gun, then its AI would be able to pick up the same gun. Once the AI detects a potential gun, the image is flagged to the operations center, where staff assesses the frame and determines if it is a positive threat. The staff is primarily military and law enforcement veterans, often those who served in special forces units. They have been specially trained in previous lines of work to understand and identify guns, as well as remain calm and collected during stressful situations. This is the type of expertise and background required to ensure each alert is thoroughly examined within seconds. The technology is in no way intended to be a replacement for humans, but the reality is that there aren’t enough people in law enforcement or security to cover the 100 million security cameras currently deployed in the U.S. alone. Origins Alaimo met cofounders such as CEO Mike Lahiff in the SEAL teams around 2007. They did deployments together and transitioned out of the military in 2013. Alaimo went to college to get a master’s degree and they met again in the business world. Alaimo was in private equity, but the work didn’t give him the sense of purpose that he had felt in the military. In 2018, the Parkland shooting happened. Lahiff picked up his daughter one day from school and she had just finished doing an active shooter drill. She was upset by that, and they talked about the security cameras. Hypothetically, the cameras would be useful after incidents, like fistfights, car thefts or mass shootings. “That’s where he had the idea. How do we take this archaic technology, the security camera? And how do we make it proactive? How do we do it so that the camera can see a gun before a shot is fired? And in that way, save a life? So that’s the founding story,” Alaimo said. They found that newer digital security cameras had better resolution and could discern the shape of a gun at a distance. “We built the algorithm with relatively modern cameras in the last 10 or 15 years in mind, not the older ones,” Alaimo said. Since they wanted to make sure there were no false positives, they built the recognition software in-house so that it could identify guns while staying away from facial recognition, Alaimo said. “We don’t want to store biometric data. We don’t even see livestreams,” Alaimo said. 
“We actually just get an alert when the algorithm says ‘Hey, I think it’s a gun.'” The company was bootstrapped at the start and it has raised just shy of $30 million to date. Now it has more than 150 people, with a headquarters near Philadelphia. Detecting guns and toy guns If it’s a real alert, ZeroEyes can contact security at a school or other place directly via text message and a mobile app. The app will reveal precisely the photo and location where the firearm has been detected. If it happens to be a toy gun, ZeroEyes can identify that quickly, Alaimo said, and it won’t send an alert about that. But it will notify a place about a fake gun on a lower alert level, such as an email, as that can still be an issue for some places. “We actually had to do that today with a school in Florida. And they were deeply grateful for it,” Alaimo said. Getting off the ground The company took about two years to build its machine-learning algorithm and it took it to the market in 2020, amid the lockdowns during COVID-19. Since no one was going to school, ZeroEyes pivoted to survive and expanded to commercial and government markets. The first school to adopt it was Rancocas Valley High School in New Jersey. The school let ZeroEyes go there to test the recognition with various kinds of weaponry for about nine months and it eventually became the first client and remains a customer. To get more experience with recognizing real guns, the company built a “Hollywood grade” lab with green screens outside of its offices in Philadelphia. There, it could test many different kinds of cameras to see if they could capture images and the AI could recognize the guns in different kinds of lighting conditions and surroundings. “It’s enabled us to basically perfect this algorithm and try to assess any sort of configuration so that we’re never surprised in the real world casinos,” Alaimo said. “We have a strong advocate in the casino space with River Spirit Casino,” where the system works despite colorful surroundings and flashing lights and dark rooms. He added, “This has been a massive help to us.” Spreading across the country As for places with analog cameras, ZeroEyes rarely comes across clients who have them these days, but they are present in older infrastructure or publicly funded institutions or schools in remote locations. In those cases, ZeroEyes asks them to upgrade their cameras first. Today, ZeroEyes is in 37 states across hundreds of clients and thousands of buildings in the commercial, education and government markets. The company has detected hundreds of guns to date. “We are, without a doubt, the most widespread gun detection company on Earth,” Alaimo said. “And we have a unique advantage. This is all we do. We have turned out to be one layer in a multilayered security approach.” The company believes such focus is better than if it did face detection or license plate detection, as the quality level might suffer. If you focus on one thing, you can do it at an A+ level instead of doing multiple things at a B level, he said. In addition, the company can work with many security companies as a result of its single-minded focus on something most of those companies don’t do. As for rivals, ShotSpotter was in the market earlier with a tech that focused on locating gunshots based on sound triangulation. It rebranded itself as SoundThinking, but Alaimo does not see it as direct competition as it is based on reacting to gunshots after they’ve already been fired, rather than detecting guns beforehand. 
Alaimo said the evolution of technology in AI has proceeded rapidly and helped the company develop its service faster. The company is also in the midst of focusing on cloud solutions, rather than on-premises data. One analyst could likely monitor around 10,000 cameras at a time — and perhaps more — based on the alert frequency. Many of the customers are paying for the service via public funding. The pricing depends on how many cameras will stream data to the operations center. It adds up to perhaps $20 to $50 per camera stream per month. Saving lives Alaimo noted one example of a person who brought a gun to a transit platform. Bystanders were looking at their cellphones while someone on a bench pulled out a semi-automatic handgun. The company dispatched an alert and law enforcement arrived quickly and arrested the evidently intoxicated man, who had a fully loaded handgun. “We just kind of reflected on that one for a second,” Alaimo said. “You cannot quantify the mass shooting, that doesn’t happen. That kind of really brought it home.” As for the need for this, Alaimo said, “You’re 15 times more likely now to be killed in a mass shooting than dying in a fire in a building. Yet every single building in America has a fire alarm and smoke detectors. So there’s a reason for that. And I think at some point, without a doubt, every camera in the world, whether it’s our technology or not, will have gun detection software. Because we can all agree that if someone has an assault rifle in front of a K-12 school,” it’s an emergency. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,134
2,023
"ServiceNow expands platform with additional generative AI capabilities to ease enterprise productivity    | VentureBeat"
"https://venturebeat.com/ai/servicenow-expands-platform-with-additional-generative-ai-capabilities-to-ease-enterprise-productivity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ServiceNow expands platform with additional generative AI capabilities to ease enterprise productivity Share on Facebook Share on X Share on LinkedIn ServiceNow logo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Low-code enterprise automation company ServiceNow has announced the expansion of its Now platform’s generative AI capabilities, introducing case summarization and text-to-code features. The company said these advancements aim to drive speed, productivity and value for customers across various industries. According to the company, the gen AI capabilities are purpose-designed to alleviate repetitive tasks and improve productivity. The Now platform’s case summarization automatically distills crucial information from IT, HR and customer service cases, streamlining resolutions. Additionally, text-to-code converts natural language text prompts into executable code for the ServiceNow platform, providing developers with an optimized and efficient way to create code. “By infusing generative AI features into the fabric of the Now Platform and all ServiceNow workflow offerings, we aim to help customers drive business value from a single source,” Jon Sigler, VP of Now Platform at ServiceNow, told VentureBeat. “The ServiceNow large language model (LLM) was specifically developed to comprehend the Now Platform, workflows, automation use cases, processes and more, ensuring higher accuracy and performance for ServiceNow use cases and increased trust.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Powered by strategic partnerships Sigler emphasized that strategic partnerships and collaborations with tech giants such as Nvidia and Hugging Face drove the company’s exclusive ServiceNow LLM development. These alliances actively expedited the advancement and integration of enterprise gen AI capabilities, merging cutting-edge research, advanced AI infrastructure and expert domain knowledge. “Our strategy is to accelerate the development and deployment of highly effective and specialized language models to unleash the true potential of generative AI for the enterprise,” Sigler told VentureBeat. “Through our work with Nvidia, we’re developing powerful, enterprise-grade generative AI capabilities that can transform business processes with faster, more intelligent workflow automation. 
We also continue to partner with research organizations like Hugging Face to establish new, responsible AI practices to train and share large language models.” In collaboration with partners Nvidia and Accenture, ServiceNow also recently unveiled the AI Lighthouse program. This initiative is tailored to enable customer companies to develop their gen AI applications swiftly, bypassing the need for protracted assessment and procurement processes. Through the AI Lighthouse program, customers will access ServiceNow’s enterprise automation platform and engine, as well as Nvidia’s AI supercomputing and software, complemented by Accenture’s consulting and deployment services. Leveraging generative AI to drive enterprise productivity ServiceNow said that case summarization enhances efficiency and expedites customer outcomes by distilling vital information from case details, prior interactions, actions and resolutions. This automation facilitates swift hand-offs between internal teams, boosts productivity and delivers streamlined resolutions for customers and employees. “We are tapping into the power of generative AI to analyze and provide case summaries in seconds,” Sigler told VentureBeat. “Through our initial use of our generative AI tools internally, we’re seeing agents spend 30% less time getting up to speed on a case and 42% less time writing case resolution notes. The new offering will remove the tedium from the reporting process and speed up issue resolution to boost agent productivity and provide better customer experience.” Likewise, text-to-code aims to empower developers with a time-efficient approach to creating code for routine commands. By composing plain, natural language text descriptions, developers harness gen AI to receive high-quality code suggestions and complete code, thereby improving code hygiene, accuracy and quality. Streamlining coding The company asserts that this integration of AI technology will streamline the enterprise coding process, making it more accessible and effective for developers. “Text-to-code boosts productivity for both pro and low-code developers, as it mitigates repetitive and time-consuming work, especially given developers tend to create the same code for common commands,” said Sigler. “Developers can write plain text descriptions of the type of code they want, and generative AI within the Now Platform will convert the text into high-quality code suggestions and share it in-line for developers to review and implement.” The new functionalities rely on a proprietary ServiceNow LLM specifically developed to comprehend the Now Platform, workflows, automation use cases and processes. “Unlike many generic large language models, StarCoder is carefully trained and fine-tuned using a vast amount of proprietary enterprise data from ServiceNow,” Sigler explained. “We believe that it will increase the productivity of every user who uses the ServiceNow platform across the organization.” The LLM, derived from the 15 billion parameter StarCoder LLM, originated from the ServiceNow co-led, open BigCode initiative. It underwent training and tuning using Nvidia accelerated computing, including Nvidia DGX Cloud. What’s next for ServiceNow? Sigler stated that the company is actively exploring how gen AI can enhance its sales teams’ efficiency in onboarding and promptly addressing product-related inquiries. Additionally, it aims to accelerate employee growth and career development by implementing gen AI. 
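Returning to the text-to-code capability: the production Now LLM and its platform integration are proprietary, but the open StarCoder checkpoint it descends from can be queried directly from Hugging Face, which gives a feel for what text-to-code looks like under the hood. The prompt below is an invented example; note that bigcode/starcoder is a gated model, so you must accept its license on the Hub and authenticate before downloading.

```python
# pip install transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # gated model: accept the license on the Hub first
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Invented prompt: a plain-language description of the code we want completed.
prompt = (
    "# Python function that returns all open incidents assigned to a user,\n"
    "# sorted by priority (highest first)\n"
    "def open_incidents_for(user, incidents):\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```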
Sigler also outlined ServiceNow’s future strategy, which involves integrating gen AI throughout the Now Platform. He said the company aims to enable its customers to swiftly operate with intelligence at scale, fostering gen AI-powered innovation across all aspects of their businesses. “We believe that generative AI will play a transformative role in enabling intelligent automation, powerful problem-solving and personalized experiences for our customers, and we are intent on providing that for them in the most efficient way possible,” Sigler added. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,135
2,023
"MIT CSAIL unveils PhotoGuard, an AI defense against unauthorized image manipulation | VentureBeat"
"https://venturebeat.com/ai/mit-csail-unveils-photoguard-an-ai-defense-against-unauthorized-image-manipulation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT CSAIL unveils PhotoGuard, an AI defense against unauthorized image manipulation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent years, large diffusion models such as DALL-E 2 and Stable Diffusion have gained recognition for their capacity to generate high-quality, photorealistic images and their ability to perform various image synthesis and editing tasks. But concerns are arising about the potential misuse of user-friendly generative AI models, which can enable the creation of inappropriate or harmful digital content. For example, malicious actors might exploit publicly shared photos of individuals by utilizing an off-the-shelf diffusion model to edit them with harmful intent. To tackle the mounting challenges surrounding unauthorized image manipulation, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced “ PhotoGuard ,” an AI tool designed to combat advanced gen AI models like DALL-E and Midjourney. Fortifying images before uploading In the research paper “ Raising the Cost of Malicious AI-Powered Image Editing ,” the researchers claim that PhotoGuard can detect imperceptible “perturbations” (disturbance or irregularity) in pixel values, which are invisible to the human eye but detectable by computer models. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Our tool aims to ‘fortify’ images before uploading to the internet, ensuring resistance against AI-powered manipulation attempts,” Hadi Salman, MIT CSAIL doctorate student and paper lead author, told VentureBeat. “In our proof-of-concept paper, we focus on manipulation using the most popular class of AI models currently employed for image alteration. This resilience is established by incorporating subtly crafted, imperceptible perturbations to the pixels of the image to be protected. These perturbations are crafted to disrupt the functioning of the AI model driving the attempted manipulation.” According to MIT CSAIL researchers, the AI employs two distinct “attack” methods to create perturbations: encoder and diffusion. The “encoder” attack focuses on the image’s latent representation within the AI model, causing the model to perceive the image as random and rendering image manipulation nearly impossible. 
Likewise, the “diffusion” attack is a more sophisticated approach and involves determining a target image and optimizing perturbations to make the generated image closely resemble the target. Adversarial perturbations Salman explained that the key mechanism employed in PhotoGuard is ‘adversarial perturbations.’ “Such perturbations are imperceptible modifications of the pixels of the image that have proven to be exceptionally effective in manipulating the behavior of machine learning models,” he said. “PhotoGuard uses these perturbations to manipulate the AI model processing the protected image into producing unrealistic or nonsensical edits.” A team of MIT CSAIL graduate students and lead authors — including Alaa Khaddaj, Guillaume Leclerc and Andrew Ilyas — contributed to the research paper alongside Salman. The work was also presented at the International Conference on Machine Learning in July and was partially supported by National Science Foundation grants, Open Philanthropy and the Defense Advanced Research Projects Agency. Using AI as a defense against AI-based image manipulation Salman said that although AI-powered generative models such as DALL-E and Midjourney have gained prominence due to their capability to create hyper-realistic images from simple text descriptions, the growing risks of misuse have also become evident. These models enable users to generate highly detailed and realistic images, opening up possibilities for innocent and malicious applications. Salman warned that fraudulent image manipulation can influence market trends and public sentiment in addition to posing risks to personal images. Inappropriately altered pictures can be exploited for blackmail, leading to substantial financial implications on a larger scale. Although watermarking has shown promise as a solution, Salman emphasized that the need for a preemptive measure to proactively prevent misuse remains critical. “At a high level, one can think of this approach as an ‘immunization’ that lowers the risk of these images being maliciously manipulated using AI — one that can be considered a complementary strategy to detection or watermarking techniques,” Salman explained. “Importantly, the latter techniques are designed to identify falsified images once they have already been created. However, PhotoGuard aims to prevent such alteration to begin with.” Changes imperceptible to humans PhotoGuard alters selected pixels in an image to disrupt the AI’s ability to comprehend the image, he explained. AI models perceive images as complex mathematical data points representing each pixel’s color and position. By introducing imperceptible changes to this mathematical representation, PhotoGuard ensures the image remains visually unaltered to human observers while protecting it from unauthorized manipulation by AI models. The “encoder” attack method introduces these artifacts by targeting the algorithmic model’s latent representation of the target image — the complex mathematical description of every pixel’s position and color in the image. As a result, the AI is essentially prevented from understanding the content. On the other hand, the more advanced and computationally intensive “diffusion” attack method disguises an image as something different in the eyes of the AI. It identifies a target image and optimizes its perturbations to resemble the target. Consequently, any edits the AI attempts to apply to these “immunized” images will be mistakenly applied to the fake “target” images, generating unrealistic-looking images. 
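The “diffusion” attack can be sketched with the same projected-gradient loop as above, except that the loss is computed on the output of the editing pipeline itself rather than on a latent code. The fragment below is schematic and assumed, not the paper's implementation: `edit_pipeline` is a stand-in for a differentiable image-editing model (a simple blur is used here only so the example runs end to end), and `target` is the decoy image the edits are steered toward.

```python
# Schematic, assumed sketch of the heavier "diffusion" attack (not the authors' code).
import torch
import torch.nn.functional as F

def dummy_edit_pipeline(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for a differentiable AI editing pipeline (e.g., img2img diffusion)."""
    blur = torch.ones(3, 1, 5, 5, device=x.device) / 25.0
    return F.conv2d(x, blur, padding=2, groups=3)

def immunize_diffusion_attack(image: torch.Tensor, target: torch.Tensor,
                              edit_pipeline=dummy_edit_pipeline,
                              eps: float = 8 / 255, step: float = 1 / 255,
                              iters: int = 50) -> torch.Tensor:
    """Perturb `image` so edits applied to it collapse toward `target` instead."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(edit_pipeline(image + delta), target)  # steer the *edit* to the decoy
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)                               # imperceptible budget
            delta.copy_((image + delta).clamp(0, 1) - image)      # valid pixel range
        delta.grad.zero_()
    return (image + delta).detach()

# Assumed usage with placeholder shapes:
# protected = immunize_diffusion_attack(torch.rand(1, 3, 256, 256), torch.zeros(1, 3, 256, 256))
```

Because gradients here flow through the entire editing process rather than a single encoder, this variant is far more computationally intensive, which matches the trade-off the researchers describe.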
“It aims to deceive the entire editing process, ensuring that the final edit diverges significantly from the intended outcome,” said Salman. “By exploiting the diffusion model’s behavior, this attack leads to edits that may be markedly different and potentially nonsensical compared to the user’s intended changes.” Simplifying diffusion attack with fewer steps The MIT CSAIL research team discovered that simplifying the diffusion attack with fewer steps enhances its practicality, even though it remains computationally intensive. Furthermore, the team said it is integrating additional robust perturbations to bolster the AI model’s protection against common image manipulations. Although researchers acknowledge PhotoGuard’s promise, they also cautioned that it is not a foolproof solution. Malicious individuals could attempt to reverse-engineer protective measures by applying noise, cropping or rotating the image. As a research proof-of-concept demo, the AI model is not currently ready for deployment, and the research team advises against using it to immunize photos at this stage. “Making PhotoGuard a fully effective and robust tool would require developing versions of our AI model tailored to specific gen AI models that are present now and would emerge in the future,” said Salman. “That, of course, would require the cooperation of developers of these models, and securing such a broad cooperation might require some policy action.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,136
2,023
"Looking for OpenAI skills? Upwork wants to help | VentureBeat"
"https://venturebeat.com/ai/looking-for-openai-skills-upwork-wants-your-help"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Looking for OpenAI skills? Upwork wants to help Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ever since OpenAI’s ChatGPT dramatically entered the AI scene in November 2022, there has been an explosion of interest and demand in AI skills — and particularly OpenAI skills. A new partnership announced today (July 31) between OpenAI and Upwork aims to help meet the demand. The OpenAI Experts on Upwork services is a way for OpenAI users to get access to skilled professionals to help with AI projects. The two organizations worked together to design the service to identify and help validate the right professionals to help enterprises get the AI skills that are needed. Among the OpenAI specific skills that the program will help to source for organizations are developers with experience working GPT-4 and Whisper, as well as AI model integration. The new offering is an expansion of Upwork’s AI Services hub launched on July 11. The AI Services Hub is a general service to help organizations find AI skills and benefit from technologies from OpenAI to help users find what they need. “This partnership is about connecting talent with relevant OpenAI skills to businesses in need of those skills to get work done more efficiently,” Dave Bottoms, GM and VP of product for the Upwork Marketplace, told VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Demand for AI skills growing The OpenAI Experts on Upwork launch is a response to demand that the two vendors are seeing in the market for AI talent. Bottoms said that Upwork has seen massive demand for AI talent: It has been exponentially since Q4 last year. He noted that in the first half of 2023, AI was the fastest-growing category overall on the Upwork platform (in terms of total number of individuals hired). “We’ve seen widespread adoption of generative AI on the Upwork platform across the board,” Bottoms said. “Gen AI job posts on our platform are up more than 1,000% and related searches are up more than 1500% in Q2 2023 when compared to Q4 2022.” What the Upwork OpenAI partnership is all about The need for AI skill is something that multiple talent platforms have seen in 2023. Earlier this year, freelance marketplace Fiverr added a dedicated AI marketplace to its service as searches spiked for AI skills. 
With its direct OpenAI partnership, Upwork is aiming to provide a vetted offering of skilled professionals. Bottoms said that the partnership is about connecting talent with relevant OpenAI skills to businesses in need of those skills, to get work done more efficiently. Bottoms said that Upwork partnered with OpenAI to identify the most common use cases for OpenAI customers — like building applications powered by large language models (LLMs), fine-tuning models and developing chatbots with responsible AI in mind. “This partnership is about connecting talent with the important skills required for success on these types of projects with businesses,” he said. “This is not about providing help desk support for OpenAI customers.” What AI skills are organizations looking for? OpenAI provides a powerful platform to build upon, and presents an exciting new horizon for many companies starting to explore its potential — but as with any emerging field it requires some specific skill sets. On the technical side, Bottoms said these might include expertise in programming, machine learning (ML) and data handling. But there’s also a need for data quality skills and privacy expertise depending on what organizations are trying to build. “Until now, talent with many of these skills were not readily available to the majority of companies,” said Bottoms. “This partnership will seamlessly connect that talent with businesses that have those needs to achieve their most ambitious AI initiatives, accelerating their ability to innovate in the process.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,137
2,023
"How generative AI code assistants could revolutionize developer experience | VentureBeat"
"https://venturebeat.com/ai/how-generative-ai-code-assistants-could-revolutionize-developer-experience"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How generative AI code assistants could revolutionize developer experience Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Developers are experiencing an evolution in how they complete work. With the advent of generative AI , a race in AI-augmented programming has begun. Several technology providers are introducing new and improved tools that provide an immersive AI coding experience and help developers scale productivity. Gen AI code generation has the potential to revolutionize software development workflow and the developer experience. Generative assistants can augment the work of developers by helping with tasks such as generating boilerplate code, refactoring legacy code, writing test cases, checking for vulnerabilities and much more. Gartner predicts that by 2025, 80% of the product development life cycle will make use of gen AI code generation, with developers acting as validators and orchestrators of back-end and front-end components and integrations. For enterprises, a superior developer experience is essential to attract and retain top engineering talent. It also ensures development teams are productive and engaged with their work, helping accelerate innovation. In a recent Gartner survey , 58% of software engineering leaders reported that developer experience is “very” or “extremely” critical to the C-suite at their organizations. Technology vendors will lead the charge in both experimenting with AI code assistants for building software faster, as well as integrating them as part of the experience they want to deliver for their customers — coders and citizen developers. Therefore, business leaders at these organizations must understand the potential of AI coding assistants and plan for how these solutions will impact outcomes across the organization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Developers will become orchestrators of software development AI code assistants will provide two important benefits for tech companies, the first being productivity. Software engineering teams will be able to scale their productivity, and therefore their ability to iterate and improve features at a faster pace. In the near future, developers will increasingly act as orchestrators of coding tasks, with code assistants completing a large majority of the work. The second benefit will be a faster response to competitive pressure. 
AI code assistants will considerably lower the barriers to entry in software development, which means new entrants in the competitive space will add to the pressure on innovation pace and margin of existing players. Development teams that do not adopt code assistants within their software life cycle will be left behind in terms of their ability to execute and to deliver against the fast-moving competitive landscape. AI code assistants will augment developer personas Many technology vendor organizations must also consider the impact of gen AI code assistants on their product offerings. For enterprises delivering software targeted to developers, product teams must capitalize on changing expectations around developer experience. Augmented integrated development environments (IDEs) with code assistants will replace basic code editors, becoming table stakes in the short term. Targeted developer personas will expect a superior experience in the applications and platforms they use. If the platform offers neither native nor integration options with vetted AI code assistant services, developers will either choose competitors that offer that option, or they will take their development efforts outside of the designated platforms offered. Business leaders at enterprises looking to provide a competitive experience for software targeting developers must work with product teams to integrate augmented IDE services into their offerings. Generative low-code and no-code applications will accelerate citizen technologist personas Finally, business leaders must also consider how gen AI code assistants can impact development activities outside of IT. Gartner predicts that by 2025, 80% of custom technology solutions within enterprises will be created by those who are not full-time technical professionals, up from 20% in 2020. Advancing into generative processes and workflows will be a natural progression from task-based code generation. Process metadata will be the baseline for training and guiding generative processes that orchestrate blocks of generative code tasks. This application of gen AI will fuel the productivity wave for low-code and no-code citizen developers. They will be able to use text-to-process generative assistants that produce processes and workflows with multiple code tasks. This will enable citizen developers to prompt generative assistants to design and build full applications that combine both front-end and back-end services. Examples of voice-to-text-to-process are already emerging for building basic functional web applications and will continue to progress in more complex tasks. Employing gen AI coding assistants to support the developer experience is just the beginning. The low-code and no-code builder experience will scale the value of gen AI coding assistants, enabling organizations to drive productivity and outcomes beyond the development team. Business leaders should support citizen technologists within their organizations in employing gen coding solutions to build applications and speed up processes. How to begin integrating AI code assistants in the enterprise To attract and retain critical software engineering talent, stay ahead of competition and drive digital transformation through citizen technologists, enterprises must embrace AI code assistant offerings within all aspects of the software development workflow. This will require business leaders to be engaged in making the right vendor and talent management decisions, as well as taking the proper risk mitigation measures. 
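As a small, concrete illustration of the task-based code generation described above — for example, asking an assistant to draft unit tests that a developer then validates — the sketch below wraps such a request behind an internal helper. It is illustrative only and assumes the 2023-era OpenAI Python SDK (`openai.ChatCompletion.create`); the model name, prompt wording and the sample function are placeholders rather than recommendations.

```python
# Illustrative sketch: asking a hosted LLM to draft a unit test for existing code.
# Assumes the 2023-era OpenAI Python SDK; model name and prompts are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set by the caller

def draft_unit_test(source_code: str, function_name: str, model: str = "gpt-4") -> str:
    """Ask the model for pytest-style tests for `function_name` in `source_code`."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,  # keep generations as deterministic as possible for review
        messages=[
            {"role": "system",
             "content": "You are a code assistant. Reply with Python test code only."},
            {"role": "user",
             "content": f"Write pytest tests for `{function_name}` in this module:\n\n{source_code}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    module = "def slugify(title: str) -> str:\n    return title.lower().strip().replace(' ', '-')\n"
    print(draft_unit_test(module, "slugify"))
```

The workflow, not the particular API, is the point: the generated tests are reviewed and executed by the developer, who acts as the validator and orchestrator described above.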
From a vendor management perspective, gen AI coding assistants are evolving rapidly, with commercial offerings currently more mature than open source versions. Vendor offerings use a range of different models, meaning that developers may prefer different products. When evaluating code assistant offerings, focus on vendors who make the exploratory experience for developers easy and accessible. Look for vendors that provide enterprise-grade services with a focus on security and privacy, as well as a continuous learning and feedback loop from code bases into the generative models powering the tools. Business leaders can begin by working with IT and software engineering leadership to pilot solutions with an eye toward fast rollouts to maximize developer productivity. Make it easy for willing developers to use approved products and encourage the sharing of best practices across engineering teams. Best practices should span not only the appropriate tools for certain tasks but also prompt engineering, with documented examples for improving the outcomes from code generation. Risks associated with gen AI tools While the responsibility for mitigating the risk of using AI code assistants is shared by the vendor and the buyer enterprises, organizations using gen AI tools for software development should actively gain awareness of risks associated with these tools. Stay vigilant across the evaluation, activation and full operationalization of AI code assistants. Potential risks to watch for include intellectual property risks, software bugs and security vulnerabilities, impacts on code quality and the overall pace of change in the vendor space, among others. AI coding assistants will enhance developer productivity, but they will not replace developers in the near to medium term. However, the prospects for the long term are yet to be determined. Technology leaders must act now to evolve their development teams to embrace the power of these offerings while planning for the long-term evolution of the software engineering experience. Radu Miclaus is a senior director analyst at Gartner, Inc. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,138
2,023
"How AI is fundamentally altering the business landscape | VentureBeat"
"https://venturebeat.com/ai/how-ai-is-fundamentally-altering-the-business-landscape"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How AI is fundamentally altering the business landscape Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past year, we’ve witnessed dramatic strides in AI development and huge shifts in public perceptions of the technology. Chatbots like OpenAI’s ChatGPT and LLMs like GPT-4 have demonstrated remarkable abilities to communicate fluently and perform at or near the highest level on a broad range of cognitive assessments. Companies that are integral to the AI ecosystem (like Nvidia) have seen their market caps soar. Talk of an AI arms race among tech giants like Google and Microsoft is ubiquitous. Despite all the excitement surrounding AI, there has been no shortage of consternation — from concerns about job displacement, the spread of disinformation, and AI-powered cyberattacks all the way to fears of existential risk. Although it’s essential to test and deploy AI responsibly , it’s unlikely that we will see significant regulatory changes within the next year (which will widen the gap between leaders and followers in the field). Large, data-rich AI leaders will likely see massive benefits while competitors that fall behind on the technology — or companies that provide products and services that are under threat from AI — are at risk of losing substantial value. There will be winners and losers in the AI race, but AI pessimists are discounting the creativity and productivity that the technology will unleash. Yes, job losses are inevitable, but so are job gains. The most successful companies won’t fight the tide of change — they will figure out how to take part in one of the greatest technological revolutions we have ever witnessed. Innovation will counteract dislocation There’s no doubt that AI will replace many roles that exist today — data entry clerks, content creators, paralegals, customer service agents and millions of other workers may discover that their careers are about to take an unexpected turn. Accenture expects 40% of all working hours to be affected by LLMs alone, as “language tasks account for 62% of the total time employees work.” The World Economic Forum’s 2023 Future of Jobs Report projects that the proportion of tasks done by machines will jump from 34% to 43% by 2027. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! That said, it’s always wise to bet on human creativity and resilience. 
As some roles become redundant, there will be increased demand for AI auditors and ethicists , prompt engineers , information security analysts, and so on. There will also be surging demand for educational resources focused on AI. PwC reports that a remarkable 74% of workers say they’re “ready to learn a new skill or completely retrain to keep themselves employable” — an encouraging sign that employees recognize the importance of adapting to new technological and economic realities. Perhaps this is why 73% of American workers believe technology will improve their job prospects. Companies should take advantage of these sentiments by focusing on talent mobility and professional development, which will simultaneously prepare their workforces for the AI era and improve retention in a stubbornly tight labor market. Beyond internal training, we’re seeing the emergence of third-party educational services focused on AI, data science , cybersecurity and many other forward-looking subjects – a trend that will likely pick up momentum in the coming years. Amid all the dire headlines about AI-fueled job losses, it’s important to remember how adaptable human beings can be. Managing AI risk will be a core priority On top of the economic shocks that will be caused by AI, the technology poses many other dangers that companies and consumers will need to account for in the coming years. AI-powered cyberattacks, problems with bias and transparency, copyright infringement, and the large-scale production of inaccurate information are all risks that are becoming increasingly urgent. The ways we manage these risks will have sweeping implications for the deployment and adoption of AI in the coming years. Take the potential role of AI in cyberattacks. According to Verizon’s 2023 Data Breach Investigations Report , almost three-quarters of data breaches involve a human element, which is why cybercriminals often rely on social engineering attacks such as phishing. LLMs are capable of producing limitless quantities of coherent and compelling text in an instant, which could give cybercriminals a powerful tool for scaling up phishing attacks (these attacks are dependent upon convincing victims to click on malicious content with realistic-sounding text). Check Point Research has already identified “attempts by Russian cybercriminals to bypass OpenAI’s restrictions.” Companies will increase their cybersecurity investments to keep pace with these developments, and we will likely see major AI-enabled cyberattacks in the near future. It will be necessary to update approaches to cybersecurity training to account for the threat posed by AI. Phishing attempts, for instance, will be harder to spot because cybercriminals will use LLMs to produce convincing (and less error-filled) text. The companies in the best position to succeed during the AI revolution are the ones that are considering the risks now and updating their compliance protocols, HR policies and cybersecurity platforms to account for the dangers of AI while leveraging its benefits. AI will fundamentally transform the business environment ChatGPT soared to 100 million monthly active users in just two months, which makes it the fastest-growing consumer application of all time. While large tech companies with access to enormous amounts of data and leading minds in the field will have significant first-mover advantages, many startups will develop innovative implementations for AI in the near future. 
The economic impact of AI will go far beyond the development of the technology itself. For example, the fusion of AI and robotics — as well as new collaborations between mechanical, electrical and software engineers — will dramatically shrink innovation cycle times, error rates and costs. Over the next year, AI-led disruption will swiftly pick up momentum: Workforces will shift, there will be drastic fluctuations in market share and valuations, and slow AI adopters will lose traction quickly. There will also be many false starts — while some companies will generate staggering returns, others will fall for misdirected hype and run into dead ends. The most successful startups will find a way to capitalize on network effects around data acquisition and partnerships with first movers. It’s impossible to know exactly what the business landscape will look like as AI rapidly improves and proliferates. But one thing is certain: Forward-thinking companies are right to focus on AI now — they just have to be cognizant of the risks along with the potential rewards. Mark Sherman is managing partner at Telstra Ventures. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,139
2,023
"Generative AI is quickly infiltrating organizations, McKinsey reports | VentureBeat"
"https://venturebeat.com/ai/generative-ai-is-quickly-infiltrating-organizations-mckinsey-reports"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative AI is quickly infiltrating organizations, McKinsey reports Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. McKinsey and Company is no stranger to generative artificial intelligence (gen AI): around half of the global consulting giant’s employees were said to be using the technology as of earlier this summer. But it’s not the only org to see a rapid uptake of gen AI. Indeed, a new annual report by McKinsey’s AI arm QuantumBlack finds that “use of gen AI is already widespread.” McKinsey reached this conclusion by conducting an online survey of 1,684 participants across various regions, industries and company sizes between April 11 and 21, 2023. The majority (79%) of respondents reported “at least some exposure to gen AI, either for work or outside of work,” while 22% said they were using it regularly for their work. Those findings echo VentureBeat’s own informal survey conducted ahead of our VB Transform conference in San Francisco last month, which found that more than 70% of companies are already experimenting with gen AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Who is using gen AI and where? Ok, so we know companies and individuals are getting their hands on gen AI, but who is using it the most, and for what? McKinsey’s new report provides some insights worth calling out. So far, North America-based respondents lead the globe in terms of gen AI adoption for work, with 28% of them using the tech in their jobs and outside of work, compared to 24% of European respondents and 22% of Asia-Pacific respondents (Greater China was just 19%). This is perhaps expected, given the gen AI craze kicked off in the U.S. in November 2022 with OpenAI’s launch of ChatGPT. Similarly, the industries that have most rapidly embraced the technology for work and/or outside of work so far are “technology, media and telecom,” at 33%, followed by “financial services” and “business, legal and professional services,” at 24% and 23%, respectively. Again, this is not too surprising: The tech industry, which originated gen AI products, has been leading the adoption curve. Interestingly, though, when it comes to which job titles use gen AI the most for work and/or outside of it, both C-suite and senior managers clocked in at 24% regular usage for work and outside of work, combined. 
Midlevel managers were close behind at 23%, although they were more likely to have had no exposure, as well (19%). What gen AI is being used for The business functions most commonly harnessing these newer tools mirror those where AI use is most prevalent overall. These include marketing and sales, product and service development and service operations such as customer care and back-office support. In fact, the single largest category of functions where gen AI was being used as of April 2023 was marketing and sales, at 14%, followed by product/service development at 13%. Very low on the list were supply chain management at 3% and manufacturing at just 2%. While these areas may prove more challenging and time-consuming for AI adoption and have some physical constraints that make them more resistant to it, supply chain management in particular would seem to be a ripe area for new gen AI products and services to take hold — as a lot of it does involve planning, analyzing market conditions and providing insights based on vast volumes of data, all of which gen AI excels at. In terms of what people are using gen AI to do, specifically, almost all of the capabilities so far revolve around creating, summarizing and analyzing documents. However, trend forecasting is a close second. More than a fleeting trend The report also revealed that gen AI is not just a fleeting trend but a strategic focus for many organizations. Nearly half (40%) of those reporting AI adoption indicated that their companies plan to ramp up their overall AI investments, thanks to gen AI. Furthermore, the technology has already made its way onto the board’s agenda for 28% of these organizations. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,140
2,023
"Fragmented truth: How AI is distorting and challenging our reality | VentureBeat"
"https://venturebeat.com/ai/fragmented-truth-how-ai-is-distorting-and-challenging-our-reality"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Fragmented truth: How AI is distorting and challenging our reality Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When Open AI first released ChatGPT , it seemed to me like an oracle. Trained on vast swaths of data, loosely representing the sum of human interests and knowledge available online, this statistical prediction machine might, I thought, serve as a single source of truth. As a society, we arguably have not had that since Walter Cronkite every evening told the American public: “That’s the way it is” — and most believed him. What a boon a reliable source of truth would be in an era of polarization, misinformation and the erosion of truth and trust in society. Unfortunately, this prospect was quickly dashed when the weaknesses of this technology quickly appeared, starting with its propensity to hallucinate answers. It soon became clear that as impressive as the outputs appeared, they generated information based simply on patterns in the data they had been trained on and not on any objective truth. AI guardrails in place, but not everyone approves But not only that. More issues appeared as ChatGPT was soon followed by a plethora of other chatbots from Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability Labs, Meta and others. Remember Sydney ? What’s more, these various chatbots all provided substantially different results to the same prompt. The variance depends on the model, the training data, and whatever guardrails the model was provided. These guardrails are meant to hopefully prevent these systems from perpetuating biases inherent in the training data, generating disinformation and hate speech and other toxic material. Nevertheless, soon after the launch of ChatGPT, it was apparent that not everyone approved of the guardrails provided by OpenAI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, conservatives complained that answers from the bot betrayed a distinctly liberal bias. This prompted Elon Musk to declare he would build a chatbot that is less restrictive and politically correct than ChatGPT. With his recent announcement of xAI, he will likely do exactly that. Anthropic, Meta approaches Anthropic took a somewhat different approach. They implemented a “constitution” for their Claude (and now Claude 2) chatbots. 
As reported in VentureBeat, the constitution outlines a set of values and principles that Claude must follow when interacting with users, including being helpful, harmless and honest. According to a blog post from the company, Claude’s constitution includes ideas from the U.N. Declaration of Human Rights, as well as other principles included to capture non-western perspectives. Perhaps everyone could agree with those. Meta also recently released their LLaMA 2 large language model (LLM). In addition to apparently being a capable model, it is noteworthy for being made available as open source, meaning that anyone can download and use it for free and for their own purposes. There are other open-source generative AI models available with few guardrail restrictions. Using one of these models makes the idea of guardrails and constitutions somewhat quaint. Fractured truth, fragmented society Although perhaps all the efforts to eliminate potential harms from LLMs are moot. New research reported by the New York Times revealed a prompting technique that effectively breaks the guardrails of any of these models, whether closed-source or open-source. Fortune reported that this method had a near 100% success rate against Vicuna, an open-source chatbot built on top of Meta’s original LlaMA. This means that anyone who wants to get detailed instructions for how to make bioweapons or to defraud consumers would be able to obtain this from the various LLMs. While developers could counter some of these attempts, the researchers say there is no known way of preventing all attacks of this kind. Beyond the obvious safety implications of this research, there is a growing cacophony of disparate results from multiple models, even when responding to the same prompt. A fragmented AI universe, like our fragmented social media and news universe, is bad for truth and destructive for trust. We are facing a chatbot-infused future that will add to the noise and chaos. The fragmentation of truth and society has far-reaching implications not only for text-based information but also for the rapidly evolving world of digital human representations. AI: The rise of digital humans Today chatbots based on LLMs share information as text. As these models increasingly become multimodal — meaning they could generate images, video and audio — their application and effectiveness will only increase. One possible use case for multimodal application can be seen in “digital humans,” which are entirely synthetic creations. A recent Harvard Business Review story described the technologies that make digital humans possible: “Rapid progress in computer graphics, coupled with advances in artificial intelligence (AI), is now putting humanlike faces on chatbots and other computer-based interfaces.” They have high-end features that accurately replicate the appearance of a real human. According to Kuk Jiang, cofounder of Series D startup company ZEGOCLOUD, digital humans are “highly detailed and realistic human models that can overcome the limitations of realism and sophistication.” He adds that these digital humans can interact with real humans in natural and intuitive ways and “can efficiently assist and support virtual customer service, healthcare and remote education scenarios.” Digital human newscasters One additional emerging use case is the newscaster. Early implementations are already underway. Kuwait News has started using a digital human newscaster named “Fedha” a popular Kuwaiti name. “She” introduces herself: “I’m Fedha. 
What kind of news do you prefer? Let’s hear your opinions.“ By asking, Fedha introduces the possibility of newsfeeds customized to individual interests. China’s People’s Daily is similarly experimenting with AI-powered newscasters. Currently, startup company Channel 1 is planning to use gen AI to create a new type of video news channel, what The Hollywood Reporter described as an AI-generated CNN. As reported, Channel 1 will launch this year with a 30-minute weekly show with scripts developed using LLMs. Their stated ambition is to produce newscasts customized for every user. The article notes: “There are even liberal and conservative hosts who can deliver the news filtered through a more specific point of view.” Can you tell the difference? Channel 1 cofounder Scott Zabielski acknowledged that, at present, digital human newscasters do not appear as real humans would. He adds that it will take a while, perhaps up to 3 years, for the technology to be seamless. “It is going to get to a point where you absolutely will not be able to tell the difference between watching AI and watching a human being.” Why might this be concerning? A study reported last year in Scientific American found “not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” according to study co-author Hany Farid, a professor at the University of California, Berkeley. “The result raises concerns that ‘these faces could be highly effective when used for nefarious purposes.’” There is nothing to suggest that Channel 1 will use the convincing power of personalized news videos and synthetic faces for nefarious purposes. That said, technology is advancing to the point where others who are less scrupulous might do so. As a society, we are already concerned that what we read could be disinformation, what we hear on the phone could be a cloned voice and the pictures we look at could be faked. Soon video — even that which purports to be the evening news — could contain messages designed less to inform or educate but to manipulate opinions more effectively. Truth and trust have been under attack for quite some time, and this development suggests the trend will continue. We are a long way from the evening news with Walter Cronkite. Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,141
2,023
"Dell, Nvidia join forces for next-gen generative AI solutions | VentureBeat"
"https://venturebeat.com/ai/dell-and-nvidia-join-forces-for-next-gen-generative-ai-solutions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dell, Nvidia join forces for next-gen generative AI solutions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dell Technologies is looking to help customers navigate the generative AI landscape with a new portfolio of solutions announced Monday. The new Dell Generative AI solutions portfolio expands on an initial announcement the company made in May under the name Project Helix , which involves a deep integration with Nvidia. As part of the Dell Generative AI portfolio, the company is announcing new validated designs with Nvidia for helping enterprises deploy AI workloads into production on-premises. The second part of the update is a set of professional services to help guide enterprises as they figure out how and where generative AI can be a business benefit. The third part of the update is new Dell Precision workstations that are targeted at data scientists, with the right mix of capabilities to help them build generative AI-powered applications. In a recent survey by Dell with global decision-makers, 91% of the respondents said they were using generative AI in their lives in some capacity already, and 71% said they were using it for work purposes, according to Varun Chhabra, senior vice president of Dell Infrastructure Solution Group (ISG). Chhabra shared the findings during a press conference. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “It’s very clear in our conversations with customers that there is a unique sense of urgency that organizations of all sizes, verticals and geographies are facing right now in terms of adopting and applying generative AI for the benefit of businesses,” Chhabra said. Project Helix gets real as Nvidia, Dell put hardware on the table The infrastructure and services of the Dell Generative AI solutions portfolio are being co-delivered by Dell with Nvidia. Chhabra said that the new Dell Validated Design for Generative AI with Nvidia offering is the installation of Project Helix which was announced at Dell Technologies World in May. The first release is not a general offering for all AI, it is focused on inferencing use cases. Conversations with enterprise customers made the urgent needs for AI clear, Chhabra said. 
He noted that organizations are looking to understand how they can take existing generative AI models that they’ve been either building from scratch or tuning with their own data, scaling them and putting them to work for their businesses. That’s why the Dell/Nvidia design is focused on inferencing. The Nvidia side of the offering includes Nvidia’s Nemo framework, which has a number of data models for different use cases and industries. The Nvidia Triton Inferencing Server is another essential part of the approach, helping to provide inferencing capabilities for existing AI models. Nvidia GPUs are also part of the hardware infrastructure that integrates Dell servers and infrastructure management capabilities. There are multiple use cases that Dell sees for generative AI services including software development, content creation, chatbots and virtual assistants. “With the Dell Validated Design for Generative AI with Nvidia focused on inferencing, customers can start with a prebuilt foundation, instead of investing time and money by trying to do it themselves, testing different infrastructure and having to learn what configurations they need to use,” Chhabra said. “This is really reducing their time to market.” Looking beyond hardware and software to enable generative AI With the rapid rise of generative AI, there is also a real need for education and professional services to help organizations adopt the technology. Chhabra said that enterprises are often at very different stages of understanding and adoption of generative AI. At the earliest stages, there is a need to define a vision for generative AI usage that aligns with an organization’s operations. Dell now has professional services to help with that early stage, which can include workshops for internal stakeholders to define a generative AI vision and identify where they want to start. For organizations that already have a strategy for generative AI that is aligned with business objectives, the next step can often be figuring out how actually to build and implement services. To that end, Dell has professional services to help implement and deploy validated designs for inferencing. And finally, for organizations that are further along, Dell has a service to help with scaling up generative AI to meet growing demands. Looking forward, Chhabra emphasized the new solutions are just getting started with a focus on inferencing and there will be more to come from Dell in the future. “This is certainly just the start of what we believe will be a long journey in helping our customers with generative AI solutions,” he said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,142
2,023
"DeepMind unveils RT-2, a new AI that makes robots smarter | VentureBeat"
"https://venturebeat.com/ai/deepmind-unveils-rt-2-a-new-ai-that-makes-robots-smarter"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind unveils RT-2, a new AI that makes robots smarter Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google’s DeepMind has announced Robotics Transformer 2 (RT-2), a first-of-its-kind vision-language-action (VLA) model that can enable robots to perform novel tasks without specific training. Just like how language models learn general ideas and concepts from web-scale data, RT-2 uses text and images from the web to understand different real-world concepts and translate that knowledge into generalized instructions for robotic actions. When improved, this technology can lead to context-aware, adaptable robots that could perform different tasks in different situations and environments — with far less training than currently required. What makes DeepMind’s RT-2 unique? Back in 2022, DeepMind debuted RT-1 , a multi-task model that trained on 130,000 demonstrations and enabled Everyday Robots to perform 700-plus tasks with a 97% success rate. Now, using the robotic demonstration data from RT-1 with web datasets, the company has trained the successor of the model: RT-2. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The biggest highlight of RT-2 is that, unlike RT-1 and other models, it does not require hundreds of thousands of data points to get a robot to work. Organizations have long found specific robot training (covering every single object, environment and situation) critical to handling complex, abstract tasks in highly variable environments. However, in this case, RT-2 learns from a small amount of robotic data to perform the complex reasoning seen in foundation models and transfer the knowledge acquired to direct robotic actions – even for tasks it’s never seen or been trained to do before. “RT-2 shows improved generalization capabilities and semantic and visual understanding beyond the robotic data it was exposed to,” Google explains. This includes interpreting new commands and responding to user commands by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.” Taking action without training According to Vincent Vanhoucke, head of robotics at Google DeepMind, training a robot to throw away trash previously meant explicitly training the robot to identify trash, as well as pick it up and throw it away. But with RT-2, which is trained on web data, there’s no need for that. 
The model already has a general idea of what trash is and can identify it without explicit training. It even has an idea of how to throw away the trash, even though it’s never been trained to take that action. When dealing with seen tasks in internal tests, RT-2 performed just as well as RT-1. However, for novel, unseen scenarios, its performance almost doubled, to 62% from RT-1’s 32%. Potential applications As they advance, vision-language-action models like RT-2 can lead to context-aware robots that could reason, problem-solve and interpret information to perform a diverse range of actions in the real world depending on the situation at hand. For instance, instead of robots performing the same repeated actions in a warehouse, enterprises could see machines that handle each object differently, considering factors like the object’s type, weight and fragility. According to Markets and Markets, the segment of AI-driven robotics is expected to grow from $6.9 billion in 2021 to $35.3 billion in 2026, an expected CAGR of 38.6%. "
3,143
2,023
"Box extends AI efforts with Microsoft 365 Copilot integration | VentureBeat"
"https://venturebeat.com/ai/box-extends-ai-efforts-with-microsoft-365-copilot-integration"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Box extends AI efforts with Microsoft 365 Copilot integration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Secure cloud content management provider Box is continuing to advance its generative AI efforts today, announcing a new integration with Microsoft 365 copilot. The new integration is a further expansion of Box’s efforts to use gen AI to help enterprise users better understand and benefit from the value of the content they have with Box. Back in May, the company announced its Box AI initiative, which embeds gen AI alongside the Box user experience to query and summarize data. Box is now growing its AI reach with a plugin that enables organizations to use Microsoft 365 Copilot to Box content. Microsoft 365 Copilot is also a gen AI technology that allows Microsoft 365 users across Word, Excel, Powerpoint and Teams to create and query content. “Microsoft 365 Copilot is really the first partnership announcement that we’re doing in this space,” Aaron Levie, Box cofounder and CEO told VentureBeat. “If you’re working within Microsoft Copilot and you want to be able to pull up a document and ask a question about your Box content, that’s what this new plugin is going to be able to do.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Everyone has an AI now, which will enterprises use? Levie said that the goal of the partnership is to allow customers to leverage their data from Box inside of Copilot in a very seamless way. The ability to enable Box content for Microsoft 365 Copilot could have very broad applicability. Levie noted that Box has more than 110,000 enterprise customers, with tens of millions of individual users in total. He suspects that the majority of those customers are using Microsoft tools in various capacities. A potential issue that enterprises will increasingly face over time with AI could well be determining which AI they should use. Both the directly integrated Box AI and the Microsoft 365 Copilot technologies have similar goals, enabling users to query, summarize and generate content. What is different is the primary environment in which they operate. “I think if you assume that all software has AI embedded into it, and all software generally integrates with some other software, then by definition, we’re going to have AI’s that are talking to each other, ” said Levie. 
“We’re going to run into the same thing with Salesforce Einstein and ServiceNow too.” Levie said that in the future there might be one piece of software that is federated across multiple provider AIs, although that future is not yet clear by any means. “It’s going to be a very exciting time to figure out exactly what user expectations are and how we are going to have it all come together,” he said. “But we’re confident that we’re going to provide a great set of value propositions when you’re within the Box experience, and then we want to make sure that you can leverage all of your data from Box, no matter what other software application you’re working with.” "
3,144
2,023
"Beyond Work raises $2.5M to make work more ‘human’ with LLMs | VentureBeat"
"https://venturebeat.com/ai/beyond-work-raises-2-5m-to-make-work-more-human-with-llms"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Beyond Work raises $2.5M to make work more ‘human’ with LLMs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. London-based Beyond Work , a startup looking to make working with enterprise tools more human and seamless with large language models. (LLMs), today announced it has raised $2.5 million in a pre-seed round of funding. The investment was led by Moonfire Ventures, with participation from MIT’s E14 fund. Beyond Work said it plans to use this capital to accelerate the development of its human-AI work platform. The technology remains under stealth as Fortune 500 enterprises continue to test it. “We are at a pivotal moment with this technology — it holds the potential to make work more human, but we will only get there if we build from scratch, rather than bolting it onto existing applications. It’s time for real change,” Christian Lanng, the chairman of the company who also serves as the cofounder and CEO of Tradeshift , said. Reimagining static work applications While specifics of the technology being developed remain under wraps, Lanng’s recent blog post and today’s press release seem to suggest that the company is working to change how teams interact with their software platforms. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “In most cases, people are not using the software, it is using them to further the purposes of others. Click a button here, change a bit of data there … and if you’re lucky it will trigger something for someone you don’t know or see — and then, maybe, just maybe, you can get on with your real work,” the chairman wrote. According to the company, most of these work interfaces, interfaces, applications and tools are static , with AI showing up only as add-ons to the existing software. This, Lanng said, will not last long, and most copilots adding more to existing interfaces will feel like “Microsoft Clippy.” The answer to this problem, the company believes, is making software interaction more human by tapping the power of language — our most powerful tool for understanding and communicating. This is where LLMs will come in. “What if we could redesign computing so that language is also the most powerful way to communicate with computers? What if the future of technology mirrored the social skills we have practiced for millennia? This is where LLMs offer true promise. 
They can replace over-designed, sprawling user interfaces with something much simpler and more human. Just tell your computer what you want it to do. Not with a keyboard and mouse, but in the way you interact with everything else in your life,” Lanng said in the blog post. Though he did not share how exactly Beyond Work’s under-development platform will use LLMs to make this happen (without serving as an add-on), the effort could completely change how enterprise teams work at present. Beyond Work claims it is leveraging LLMs from scratch and building a completely new kind of application and day-to-day experience that is both enterprise-safe and revolutionary. The founding team of the company includes talent from Uber, Tradeshift, Stack Overflow and KMD. No word on official launch As of now, Beyond Work is testing the platform with multiple enterprise customers and focusing largely on improving its quality and experience. It has not shared when it will make public more details about the technology. “Work is one of the most universal parts of the human experience — but sadly, so are its frustrations. This team has an opportunity to undo that and free up human energy and attention in a way that simply didn’t exist before. We can’t wait to see it, and we can’t wait to start using it,” Mattias Ljungman, who co-founded Moonfire Ventures, said. "
3,145
2,023
"Amazon grows generative AI efforts with Bedrock expansion for AWS | VentureBeat"
"https://venturebeat.com/ai/amazon-grows-generative-ai-efforts-with-bedrock-expansion-for-aws"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon grows generative AI efforts with Bedrock expansion for AWS Share on Facebook Share on X Share on LinkedIn Image source: Screengrab from AWS Summit New York livestream Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon Web Service (AWS) is doubling down on its initiatives to be the cloud provider of choice for organizations looking to benefit from generative AI. At the AWS Summit New York event yesterday, the cloud leader outlined its overall strategy for generative AI and announced a series of iterative updates and incremental new services. The latest round of gen AI updates comes as AWS continues to face stiff competition from rivals including Microsoft, with its Azure OpenAI services , as well as the growing set of generative AI services from Google. The core for Amazon is its AI foundation model service called Bedrock , which was announced back in April, with support for models from AI21 , Anthropic and Stability AI , as well the Amazon Titan models. The list of supported models has now been expanded to include Cohere as well the Anthropic Claude 2 and Stability AI SDXL 1.0 models. Beyond expanded model support, Amazon also announced a new Bedrock agent capability to help make it easier for users to build services. In addition to the Bedrock updates, Amazon announced gen AI capabilities for its Amazon Quicksight business intelligence service, and a preview of a vector engine for the OpenSearch serverless search service. The cloud giant also used the event to announce the general availability of the AWS Entity Resolution service to help improve data management. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Generative AI has captured our imaginations for its ability to create images and videos and even generate code, and I believe it will transform every application, industry and business,” Swami Sivasubramanian, VP of databases, analytics and ML at AWS, said during his keynote address at AWS Summit New York. Bedrock gets new agents An agent in generative AI is a tool that can help to execute multiple tasks on behalf of a developer. When Bedrock first launched there were no native agent capabilities, but that’s changing with a preview announced at the AWS event. Sivasubramanian explained that Agents for Amazon Bedrock is a new capability for developers to enable generative AI applications to complete tasks in just a few clicks. 
He noted that agents can be used to help configure foundation models automatically and help to orchestrate tasks without having to write any code. Sivasubramanian said that the agent securely connects a foundation model to the right data source through a simple API. The agent can also be used to help automatically convert data into machine-readable format. “Agents in Bedrock can take action by automatically making API calls on your behalf and you do not have to worry about complex systems and hosting them because it is fully managed,” Sivasubramanian said. More vectors come to AWS Vector embeddings are an essential element of generative AI, converting content into mathematical representations to enable context and matching. Vectors must be stored in a database with vector capability. To that end, Sivasubramanian noted that today AWS offers vector database capabilities for its Aurora PostgreSQL relational database. This vector support, announced back on July 13, is enabled via the open-source pgvector technology. Amazon is now extending vector engine support as a preview feature to its OpenSearch serverless search service as well. “This vector engine offers simple, scalable and high-performing vector storage and search without having to manage any infrastructure,” Sivasubramanian said. Business intelligence gets smarter with generative AI Also of note among Amazon’s AI updates is the integration of generative AI into the Amazon QuickSight business intelligence service. According to Sivasubramanian, business analysts spend a lot of time and effort developing the right data visuals and reports from business data. He said that instead of struggling with complex formulas and commands, with the new generative AI capabilities for Amazon QuickSight, business analysts can use natural language queries to build the reports, visuals and dashboards they need in less time than ever. So, while there was no major groundbreaking news at AWS Summit New York 2023, the steady drumbeat of iterative innovations that aim to help make AI practically useful in enterprise settings continues at the cloud leader. “This is just the beginning, we have a lot more coming this year,” Sivasubramanian said. "
3,146
2,023
"Air Force selects Qylur to explore AI that monitors autonomous vehicles  | VentureBeat"
"https://venturebeat.com/ai/air-force-turns-to-qylur-for-ai-that-monitors-autonomous-vehicles"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Air Force selects Qylur to explore AI that monitors autonomous vehicles Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In an increasingly connected world of autonomous vehicles and edge devices, armed forces around the world are seeking to improve the coordination and performance of complex systems. To this end, the U.S. Air Force has signed a contract with tech contractor Qylur Intelligent Systems to explore how AI can be used in “Collaborative Autonomous Systems,” specifically, helping maintain data integrity and the performance of groups of autonomous vehicles over time. The Small Business Innovation Research contract will fund research and development into Qylur’s “Social Network of Intelligent Machines (SNIM)” AI — “a patented, core technology for ongoing management of autonomous intelligent devices and for maintaining the long-term superiority of their AI performance,” according to Qylur’s news release. But the company is aiming for commercial applications as well. “This is a core technology that we’re putting inside our own systems,” said Qylur CEO Lisa Dolev in a phone call with VentureBeat. “We’re working to go into this world of defense and be helpful as we can to win any advantage for our country. On the commercial side of it, [the technology] can be applied in autonomous cars, autonomous agriculture machines, home robotics — even in medical nano machines.” Qylur’s software stakes its claims on solving the challenges associated with the deployment of on-device AI. SNIM AI provides a performance-monitoring layer to the equipment found on the edges of the network, such as industrial robotics for private companies or drones for the Air Force. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Founded in 2005 by Dolev, Qylur has been active in the business of venue and event security technology , producing the Q Entry Experience, a honeycomb-shaped bag scanner. Small batteries and small datasets: challenges to overcome Qylur’s equipment was deployed at the 2016 Rio Olympics and San Francisco’s Levi’s Stadium, providing insights that allowed the company to discover an obstacle often faced when deploying remote-sensing devices and mobile equipment: small data sets available to train models. 
Qylur’s initial products in the security space sought to detect guns and explosives, but the actual event of someone trying to hide weapons happened very rarely. It needed a solution. Much like the more familiar online social networks, SNIM AI connects groups of related devices which then use the same set of shared data. Qylur says these pools of resources optimize the accuracy of decision-making and speed up real-world adaptations of the models. These features are relevant to both combat arenas and industrial use cases, as either can be fast-moving, changing environments. Edge devices are limited by battery power and low processing ability when compared to more centralized infrastructure. Qylur’s SNIM AI seeks to alleviate those impediments by tailoring models to specific use and deployment cases. “[SNIM] allows you to have these specialized and customized mission-specific models, so you don’t have to have everything there all the time,” said Dolev. Early experience revealed AI model drift An AI model’s behavior can also change over time, making it necessary to address “AI drift” in ongoing operations. “If you have something that you think is working nearly perfect, in a few weeks, it could not be working perfect — it could be going a little haywire,” said Dolev. The SNIM mitigates AI drift by automatically detecting it, and in response, retraining custom-boosted models and redeploying them to the edge devices. “We’ve understood very early on that this drift happens, and you have to manage it all of the time. Not just once or twice — all the time,” said Dolev. “Managing it can be extremely expensive if you need a whole bunch of data scientists and an ML ops. So [SNIM AI is] an automatic way to do that.” The goal of the partnership with the military is to allow Qylur to adapt the SNIM AI technology into fulfilling the Air Force’s use cases. Qylur also sees ongoing commercialization of SNIM AI as critical next steps to equipping AI-enabled devices in the field. “The easiest thing would be to say this is kind of like an AI for the AI; the guardrails for AI — like a gardener for it,” said Dolev. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,147
2,023
"A year ago, DeepMind's AlphaFold AI changed the shape of science — but there is more work to do | VentureBeat"
"https://venturebeat.com/ai/a-year-ago-deepminds-alphafold-ai-changed-the-shape-of-science-but-there-is-more-work-to-do"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages A year ago, DeepMind’s AlphaFold AI changed the shape of science — but there is more work to do Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI’s ChatGPT may have captured the AI zeitgeist last fall, but it was DeepMind’s AlphaFold AI that shook the science world last summer. A year ago, on July 28, 2022, the Alphabet-owned company announced that AlphaFold had predicted the structures for nearly all proteins known to science and dramatically increased the potential to understand biology — and, in turn, accelerate drug discovery and cure diseases. That built on its groundbreaking work from a year earlier, when DeepMind open-sourced the AlphaFold system that had mapped 98.5% of the proteins used in the human body. Today, DeepMind (now Google DeepMind) says the AlphaFold Protein Structure Database has been used by over 1.2 million researchers in over 190 countries, and that adoption rates of AlphaFold are growing fast in all domains. A few weeks ago, DeepMind CEO Demis Hassabis told The Verge that while AI chatbots have gone viral, he believes it is AlphaFold that has “had the most unequivocally biggest beneficial effects so far in AI on the world.” Nearly every biologist in the world has used it, he pointed out, while Big Pharma companies are using it to advance their drug discovery programs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “I’ve had multiple, dozens, of Nobel Prize-winner-level biologists and chemists talk to me about how they’re using AlphaFold,” he said, while admitting that “the average person in the street doesn’t know what proteins are … whereas obviously, for a chatbot, everyone can understand, this is incredible.” DeepMind continues to invest in AlphaFold Of course, in an era when top AI companies are dealing with potential regulation, a rising tide of lawsuits, and criticism about model risks, it helps to have a big win with AI that offers unequivocal benefits to humanity. According to DeepMind, AlphaFold has already been used to discover new disease threats in Madagascar; develop a more effective malaria vaccine; develop new drugs to treat cancer; and tackle antibiotic resistance. 
But the AlphaFold team isn’t resting on its laurels: One of AlphaFold’s researchers, Kathryn Tunyasuvunakool, told VentureBeat in an interview that “there are a lot of problems in proteins that are not fully solved,” and that it would be “wonderful” to see more real-world applications for AlphaFold over the next 10-20 years. “I just want to see AI continuing to make a positive impact on problems in biology,” she said. “It’s such a complicated field with such messy data, and it really feels like the sort of thing where we need computers to help us unpick how this all fits together.” DeepMind is no longer alone in its shape-shifting science prediction efforts: In November 2022, Meta used an AI language model to predict the structures of more than 600 million proteins of viruses, bacteria and other microbes. And it was able to make those predictions in just two weeks. However, Hassabis said on a recent podcast with Ezra Klein that “advancing science and medicine is always going to be at the heart of what we do and our overall mission … that involves us continuing to invest and work on scientific problems like AlphaFold.” DeepMind’s AlphaFold solved the ‘protein-folding challenge’ DeepMind had actually first solved what was a half-century-long biology conundrum — known as the “protein-folding challenge” — in November 2020, when it first released AlphaFold. Proteins, which support nearly all of life’s functions, are complex molecules made up of chains of amino acids, each with its own unique 3D structure. Figuring out how proteins fold into their unique crumpled shapes had been a persistent problem, but AlphaFold offered a new method to accurately predict those structures. The system was trained on the amino acid structures of 100,000-150,000 proteins. “It’s by far the most complicated system we ever worked on,” Hassabis told Klein. “And it took five years of work and many difficult wrong turns.” Tunyasuvunakool said that she was one of the “more pessimistic” people on the AlphaFold team. “I was not at all confident that this is a problem that we will be able to solve — I never really imagined we would get to this sort of impactful level of accuracy,” she said. “It was only later that I started to think: If we actually solve this, this is going to be quite a big deal.” The biggest problem, she said, was the sheer magnitude of different options for how a protein can fold if it wants to go from a linear sequence of amino acids to a complex 3D structure. “There are just billions and billions of combinations for how that structure could look.” In July 2022, DeepMind announced that AlphaFold had predicted more than 200 million protein structures, which was nearly all of those catalogued on a globally recognized repository of protein research. According to DeepMind, a single protein structure can take the whole length of a person’s Ph.D. studies and cost an average of $100,000 to determine experimentally. By predicting the structures of over 200 million proteins, AlphaFold “potentially saved the equivalent of up to 1 billion years of research and trillions of dollars.” There are plenty of protein problems left to solve Tunyasuvunakool emphasized that while AlphaFold solved one big challenge, there are still plenty of “holy grail” problems in the world of proteins that are not fully solved. 
“A better understanding of protein physics would be a big one,” she said, explaining that AlphaFold mainly predicts static protein structures, but a lot of proteins perform their function by changing their shape over time. “So if you think about something like a channel that decides whether to let things in and out of the cell, those tend to come in two different shapes — and for certain applications, you really care about having this structure versus this one, or knowing about how much time they spend in each of those states,” she said. Understanding that distribution is important for areas like medicine and drug development, she explained: “Having a model that is more aware of protein physics, that was able to predict the multiple states that a protein moves through, would be really helpful.” Overall, she said, the biggest excitement is around seeing the level of uptake of AlphaFold as a tool across the field of biology. “I think it’s pretty unusual for computational biology tools to make this much of a widespread impact,” she said. “At this stage, the paper has had over 10,000 citations — I think I can comfortably say it’s going to be the biggest thing I ever work on.” But DeepMind likely has larger ambitions in the space: In 2021, Hassabis launched biotech startup Isomorphic Labs for drug research, and the company is reportedly getting “closer to securing its first commercial deal” and is “ building on the AlphaFold breakthrough as DeepMind’s sister company.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,148
2,023
"Why healthcare in the cloud must move to zero trust cybersecurity | VentureBeat"
"https://venturebeat.com/security/why-healthcare-in-the-cloud-must-move-to-zero-trust-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why healthcare in the cloud must move to zero trust cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Healthcare providers must look beyond the cloud and adopt zero-trust security to succeed in fighting back against the onslaught of breaches their industry is experiencing. Attackers often prey on gaps in network servers, incorrectly configured cloud configurations, unprotected endpoints, and weak to non-existent identity management and privileged access security. Stealing medical records, identities and privileged access credentials is a high priority for healthcare cyberattackers. On average, it takes a healthcare provider $ 10.1 million to recover from an attack. A quarter of healthcare providers say a ransomware attack has forced them to stop operations completely. Healthcare must build on cloud security with zero trust Forrester’s recent report, The State of Cloud in Healthcare, 2023 , provides an insightful look at how healthcare providers are fast-tracking their cloud adoption with the hope of getting cybersecurity under control. Eighty-eight percent of global healthcare decision-makers have adopted public cloud platforms, and 59% are adopting Kubernetes to ensure higher availability for their core enterprise systems. On average, healthcare providers spend $9.5 million annually across all public cloud platforms they’ve integrated into their tech stacks. It’s proving effective — to a point. What’s needed is for healthcare providers to double down on zero trust, first going all-in on identity access management (IAM) and endpoint security. The most insightful part of the Forrester report is the evidence it provides that continuing developments from Amazon Web Services , Google Cloud Platform , Microsoft Azure and IBM Cloud are hitting the mark with healthcare providers. Their combined efforts to prove cloud platforms are more secure than legacy network servers are resonating. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! That’s excellent news for the industry, as the latest data from the U.S. Department of Health and Human Services (HHS) Breach Portal shows that in the last 18 months alone, 458 healthcare providers have been breached through network servers, exposing over 69 million patient identities. The HHS portal shows that this digital pandemic has compromised 39.9 million patient identities in the first six months of 2023, harvested from 298 breaches. 
Of those, 229 resulted from successful hacking, 61 from unauthorized access/disclosure, and the remainder from theft of medical records. Business email compromise (BEC) and pretexting are responsible for 54 breaches since January, compromising 838,241 patients’ identities. Considered best-sellers on the Dark Web, patient medical records provide a wealth of data for attackers. Cybercrime gangs and globally organized advanced persistent threat (APT) groups steal, sell and use patient identities to create synthetic fraudulent identities. Attackers are getting up to $1,000 per record depending on how detailed the identity and medical data are. Lessons from the 2023 Telesign Trust Index , which showed the increasing fragility of digital trust, must also be applied to healthcare. Turning weaknesses into strengths with zero trust Forrester concludes that healthcare providers are prime targets for attackers because they use outdated legacy technologies, especially when storing sensitive patient data. That weakness is magnified by the urgency of getting critical care to patients. “Threat actors are increasingly targeting flaws in cyber-hygiene, including legacy vulnerability management processes,” Srinivas Mukkamala, chief product officer at Ivanti, told VentureBeat. In fact, Ivanti’s Press Reset: A 2023 Cybersecurity Status Report found that all organizations are behind in protecting against ransomware, software vulnerabilities, API-related attacks and software supply chain attacks. Ivanti’s research results underscore why zero trust needs to become an urgent priority in all healthcare organizations, given that many lag behind peers in other industries on these core dimensions. Forrester observed that “CISOs may be reluctant to trust the public cloud, but outsourcing to a multitenant platform can benefit healthcare providers with military-grade AES 256 data encryption that helps prevent data exposure and theft. Global hyperscalers offer compliant instances and consulting services to help meet regulatory compliance. Similarly, EHR systems such as Oracle Cerner and Epic Systems are now offering cloud-based offerings/partnerships.” Every healthcare provider needs a zero-trust roadmap tailored to its greatest threats The goal is to become more resilient over time without breaking budgets or asking for major investments from the board. An excellent place to start is with a zero-trust roadmap. There are a few standard documents CISOs and CIOs running healthcare IT and cybersecurity should use to tailor zero-trust security to their unique business challenges. The first is from the National Institute of Standards and Technology’s (NIST) National Cybersecurity Center of Excellence (NCCoE). The NIST Cybersecurity White Paper (CSWP), Planning for a Zero Trust Architecture: A Guide for Federal Administrators , describes processes for migrating to a zero-trust architecture using the NIST Risk Management Framework (RMF). Second, John Kindervag, who created zero trust while at Forrester and currently serves as senior vice president, cybersecurity strategy and ON2IT group fellow at ON2IT Cybersecurity, and Dr. Chase Cunningham were among several industry leaders who wrote the useful President’s National Security Telecommunications Advisory Committee (NSTAC) Draft on Zero Trust and Trusted Identity Management. 
The document defines zero-trust architecture as “an architecture that treats all users as potential threats and prevents access to data and resources until the users can be properly authenticated and their access authorized.” The Cybersecurity and Infrastructure Security Agency (CISA) publishes a hub of the President’s NSTAC Publications , providing a valuable index of the committee’s body of work. Proliferating ransomware attacks underscore the need to enforce least privileged access across every threat surface “We know that bad guys, once they’re in the network and compromise [it], the first [breached] machine can move laterally to the next machine, and then the next machine, and the next machine. So once they’ve figured that out, the chances of you having a ransomware breach and having data exfiltrated from your environment increase,” Drex DeFord, executive strategist and healthcare CIO at CrowdStrike, told VentureBeat during an interview. The U.S. Department of Health and Human Services (HHS) Health Sector Cybersecurity Coordination Center (HC3) provides a series of Threat Briefs that healthcare CISOs and CIOs should consider subscribing to and staying current with. The depth of analysis and insight the HCS puts into these briefs is noteworthy. To understand the scale of healthcare providers’ challenges with ransomware, VentureBeat also recommends reading the June 8, 2023 presentation, Types of Threat Actors That Threaten Healthcare. Another brief reveals how nation-state attacks are among the most sophisticated and challenging to stop: the November 3, 2022 Threat Brief titled “Iranian Threat Actors and Healthcare. ” Two high priorities, according to CISOs: a compromise assessment, and a subscription to an incident response retainer service Healthcare providers and supporting organizations need a clear baseline across all systems to verify that their existing IT environments and tech stacks are clean. “When you have a compromise assessment done, [getting] a comprehensive look at the entire environment and [making] sure that you’re not owned, and you just don’t know it yet, is incredibly important,” DeFord told VentureBeat during an interview. DeFord and other CISOs interviewed for this article also advise healthcare CISOs to get an incident response retainer service if they don’t already have one. “That makes sure that should something happen, and you do have a security incident, you can call someone, and they will come immediately,” DeFord advises. IoT, edge computing and connected medical devices make endpoint security a constant battle Most legacy IoT sensors, the machines attached to them, and medical devices aren’t designed with security as a primary goal. That’s why attackers love these devices. Dr. Srinivas Mukkamala, chief product officer at cybersecurity company Ivanti, says business leaders must realize the cost of managing endpoints, IoT and medical devices by continually improving security. “Organizations must continue moving toward a zero-trust model of endpoint management to see around corners and bolster their security posture,” Mukkamala told VentureBeat. Absolute Software’s 2023 Resilience Index shows that the average endpoint has 11 different security agents installed, each degrading at a different rate and creating memory conflicts. This leaves the endpoint unprotected and vulnerable to a breach. Overloading endpoints with too many agents is just as bad as having none installed. 
CISOs and CIOs in healthcare need to audit every endpoint agent installed and find out if and how they conflict with each other. A core part of the audit is knowing which identities have access rights for each endpoint, including third-party contractors and suppliers. Captured audit data is invaluable in setting least privileged access policies that strengthen zero trust on every endpoint. Protecting patient identities requires making zero trust a priority Healthcare CISOs are under pressure to ensure their IT and cybersecurity investments deliver business value. One of the most valuable assets any healthcare provider has is patient trust. More healthcare providers need to consider how to create secure customer experiences with zero trust. Telesign CEO Joe Burton told VentureBeat that while customer experiences vary significantly depending on their digital transformation goals, it is essential to design cybersecurity and zero trust into customer workflows. That’s excellent advice for healthcare providers under siege by attackers today. “Customers don’t mind friction if they understand that it’s there to keep them safe,” Burton said, adding that machine learning is an effective technology for streamlining the user experience while balancing friction. He told VentureBeat that customers could gain reassurance from friction that a brand, company or healthcare provider has an advanced understanding of cybersecurity and, most importantly, of the importance of protecting patient data and privacy. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,149
2,023
"Cyera raised $100 million for data security for AI-driven enterprises | VentureBeat"
"https://venturebeat.com/security/cyera-raises-100m-expand-data-security-platform-ai-driven-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cyera raises $100M to expand data security platform for AI-driven enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cyera , a data security company, announced that it has secured a $100 million series B investment led by Accel, with participation from existing investors Sequoia and Cyberstarts. Redpoint Ventures also joined as an investor. With this latest funding round, Cyera has raised a total of $160 million since emerging from stealth in March 2022. The company claims that its revenue has grown 800% over the past year as security teams prioritize data security across hybrid cloud environments, particularly among S&P 500 enterprises. “Securing $100 million in the current economic climate, where there have been many reports and analyses highlighting the dearth of funding, down-rounds and early exits, validates our vision and relentless execution to redefine data security for the enterprise,” Yotam Segev, co-founder and CEO of Cyera, told VentureBeat. Segev said he noted a trend over the past year where legacy companies have attempted to adopt data security and posture management (DSPM) capabilities, despite facing significant limitations due to outdated architectures. In contrast, Cyera made strategic investments to expand its AI-powered data security platform, focusing on enterprise-level data discovery, classification and security. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “That’s how we raised over $160 million to date,” he said. “The fundamental difference with Cyera is a focus on truly knowing what data represents. Our platform was purpose-built to leverage cloud-scale AI and machine learning to dynamically discover, classify and understand an enterprise’s unique data.” According to Segev, Cyera’s security platform sets itself apart by going beyond analyzing data through plain pattern recognition or threat signature detection. Instead, it actively and consistently identifies and draws inferences from the customer’s data environment. He said this approach enables customers to gain a comprehensive understanding of their data, including its nature, significance and the underlying value it represents. Segev emphasized that this capability enhances security, streamlines compliance efforts and empowers organizations to embrace a data-driven approach. By leveraging their data effectively, businesses can expedite their operational processes and foster innovation. 
Furthermore, the platform automates remediation workflows to minimize the attack surface and ensure operational resilience at the speed and scale of the cloud. “Our process is fully automated, continuous, and can be scripted like any other modern development (continuous integration / continuous deployment) process operates. Legacy providers require manual efforts, and newer cloud infrastructure and SaaS application vendors are all narrowly focused and create more silos in the enterprise,” Segev told VentureBeat. “Cyera is designed and architected to discover, classify, evaluate and secure all of an enterprise’s data, everywhere.” Leveraging LLMs to aid enterprise data security Cyera’s platform uses large language models (LLMs) to automatically discover, classify and secure sensitive data from various sources. A unified policy engine actively detects misconfigurations, suggests specific access controls and generates new security policies to ensure compliance and govern access to sensitive data. “The Azure OpenAI integration builds on Cyera’s use of machine learning and large language models. Our LLMs can automatically differentiate between roles (like customers and employees), understand the data’s origin and purpose, and learn when data can be used to identify an individual,” added Segev. “The platform then applies the correct security, privacy and compliance policies to uncover and remediate exposures to structured and unstructured data wherever it is being managed.” Segev said that the integration of OpenAI’s LLMs will allow security practitioners to use natural language to align data stores, data classes, and issues with the specific technical or business problem they aim to address. For instance, by employing semantic search, one can instantly identify the issues that increase an organization’s risk of a breach, such as accessibility of protected healthcare information (PHI) by unauthorized users. Cyera stated that its unified policy engine can also detect misconfigurations, provide tailored access control recommendations and generate new policies to govern data access. “Cyera can answer fundamental questions around what data an enterprise has, where it is, who can access and is using it, what exposes it to security, privacy and compliance risk, and how to remediate those risks, all from a single platform,” explained Segev. “That is unique because it provides a foundational and centralized location for every function in an enterprise to understand [whether] appropriate controls around storage, management and use are being applied.” Prioritizing data in the cloud With the newly acquired funding, the company aims to accelerate the development of its cloud-native platform. Cyera said this initiative will empower security teams to handle data security incidents effectively, manage policies and controls, and streamline workflows across their entire data landscape. “Beyond the deep knowledge of data, and cloud-native architecture, we are also focusing on static posture improvements (what DSPM represents), real-time detection and response to changes (Data Detection and Response, or DDR), achieving least privileged access to data and recognizing misuse and anomalies in access (a.k.a. Data Access Governance or DAG), and data privacy,” said Segev. 
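As a purely illustrative aside, the sketch below shows the general discover-classify-apply-policy pattern that data security platforms of this kind automate. It is not Cyera's implementation, which the company says relies on LLMs and cloud-scale machine learning rather than fixed patterns; the regexes and policy names are invented for illustration only.

# Simplified illustration of the discover -> classify -> apply-policy pattern
# described above. This is NOT Cyera's implementation (which the company says
# uses LLMs rather than fixed patterns); the regexes and policy names are
# invented purely for illustration.

import re
from typing import Dict, List

CLASSIFIERS: Dict[str, re.Pattern] = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

POLICIES: Dict[str, str] = {
    "email_address": "mask in logs; restrict access to support role",
    "us_ssn": "encrypt at rest; alert on any external share",
}

def classify(record: str) -> List[str]:
    """Return the data classes detected in a text record."""
    return [name for name, pattern in CLASSIFIERS.items() if pattern.search(record)]

def recommended_actions(record: str) -> List[str]:
    return [POLICIES[c] for c in classify(record)]

if __name__ == "__main__":
    sample = "Contact jane.doe@example.com, SSN 123-45-6789, about the claim."
    print(classify(sample))             # ['email_address', 'us_ssn']
    print(recommended_actions(sample))  # policies a platform might enforce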
Segev asserts that adopting this approach allows enterprises to prioritize data in their security strategy, streamline compliance audits and information requests (such as the right to be forgotten or records of processing activity), and ultimately embrace a data-driven approach to address daily challenges and seize growth opportunities. “When implementing Cyera, we immediately got a full picture of our cloud data landscape,” said Erik Bataller, VP of security at ACV Auctions, in a written statement. “The platform showed us that we had a lot of ghost data that was not being accessed or used. Eliminating it will save us over $50,000 per year in cloud storage costs.” To secure the use of generative AI , the company has introduced SafeType, a browser extension designed to anonymize sensitive data entered into ChatGPT. SafeType proactively detects sensitive information and prevents it from being transmitted to the gen AI platform. When a user inputs sensitive data into ChatGPT, SafeType promptly recognizes it and educates the user on why sharing such information is discouraged. It also offers options to anonymize the data or delete it from the session. The SafeType extension is available for the Google Chrome and Microsoft Edge browsers and on Cyera’s website. “The software uses the permissive Apache 2.0 license, making it easy for developers to contribute to the project or use the code in their applications,” Segev told VentureBeat. “It is released under community preview, and is not connected to Cyera’s data security platform, does not share code with the platform, and does not interact with Cyera in any way.” What’s next for Cyera? Segev emphasized that Cyera aims to enable businesses to fully harness the power of their data through AI. He said that the recent investment confirms Cyera’s dedication to assisting chief information security officers (CISOs) in addressing their paramount challenge of securing data in the cloud era. Segev highlighted that the additional funding will also expedite the development of Cyera’s data security platform and support the expansion of its global go-to-market initiatives. “We would not have been able to secure a $100 million round in this economic and funding environment without having a transformational impact on our customers’ data security programs. We aim to further solve critical problems for CISOs, holistically across their entire SaaS , PaaS and IaaS infrastructures,” said Segev. “This funding enables us to accelerate product development, hire the best engineering talent and expand our go-to-market initiatives. Our team is incredibly passionate about solving CISOs’ most pressing security challenges.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,150
2,023
"A blueprint for game developers: How to manage upcoming internet trust and safety regulations | VentureBeat"
"https://venturebeat.com/games/a-blueprint-for-game-developers-how-to-manage-upcoming-internet-trust-and-safety-regulations"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Sponsored A blueprint for game developers: How to manage upcoming internet trust and safety regulations Share on Facebook Share on X Share on LinkedIn Presented by Modulate This article is part of GamesBeat’s special issue, Gaming communities: Making connections and fighting toxicity. Over a dozen new and pending internet trust and safety regulations are slated to seriously impact game developers in the near future, from the United States and the EU to Australia, Ireland, the U.K. and Singapore. The regulations target the rise of hate speech, harassment and misinformation driven by major world events, including COVID-related misinformation , potential election influence and the rise of white supremacist extremism. On top of that, privacy laws are being revisited, such as California’s Age-Appropriate Design Act, modeled off the U.K.’s Children’s code. And as the DSA and other regulations begin kicking into force in 2024, experts only expect enforcement to become more common. Unfortunately, “No one reported it! We didn’t know there was illegal content!” won’t cut it anymore. Today, regulators and consumers are looking for evidence that the studios are taking the problem seriously, with a focus on harm reduction. In other words, game studios must now proactively minimize any harm on their platforms since they could be liable for such harms even if they were never reported by users. The major requirements for compliance Compliance has eight major components, which may sound daunting at the outset. It includes writing a clear code of conduct and updating terms of service and producing regular transparency reports, with the help of internal teams who can work with regulators as needed. On the platform, developers need to find ways to minimize harmful content at scale, especially terrorism, CSAM and grooming – which means building out new moderation and monitoring tools. A user report portal and an appeals portal, to which staff should respond in a timely manner, are crucial. Finally, developers should conduct regular risk assessments and implement UI changes to increase privacy by design, as well as configure privacy by default for all children. Before anything, however, because of the complexity involved, it’s critical to consult with legal and regulatory experts to ensure the appropriate steps are in place to comply with global regulations. Here’s a look at each of those steps, and how developers can prepare. 1. Writing a clear code of conduct While an in-depth code of conduct is now a regulatory requirement, it’s also good sense. The vast majority of misbehavior by players is due to unclear guidance on what’s permissible, and much of the trust lost between studios and their users comes from “black box” reporting, appeals or actioning processes. A code of conduct should explain precisely which types of behaviors are harmful and could result in an action. It should also identify clearly what types of actions can be taken, and when. 
Finally, the code of conducts should explain what recourse players have if they experience harmful content, feel they’ve been wrongfully actioned, or want to limit the use of their personal data. If you’re looking for a place to start, check out other studio codes, or consult with a regulatory expert. 2. Producing regular transparency reports Transparency Reports are meant to fill a void where regulators and consumers feel platforms have been insufficiently open regarding the severity and prevalence of harmful content on their platforms – and what measures the platform is taking to resolve these issues. The most efficient way to integrate reports into your strategy is adopting a technology solution, powered by machine learning. Today, innovative moderation platforms like Modulate’s AI-powered ToxMod can automatically track action rates, how frequently appeals result in turnovers and the accuracy of player reports. It can also proactively provide insight into the total number of harmful behaviors across the platform, and even the number of individuals exposed to illegal content – both crucial components of an effective transparency report – and most studios currently lack the tools to measure. 3. Minimizing harmful content Most platforms today rely primarily on user reports to identify harmful content, including terrorism, child sexual abuse material and grooming content, but that’s never been sufficient protection. Again, an AI-powered moderation tool can identify, log and escalate issues, automating harmful content elimination. How studios handle issues like harassment and cyberbullying will also be scrutinized. With regulators and enforcement agencies shifting towards a “harm standard,” sufficiently bad outcomes for users can create liability for a studio, even if they never received a user report. An AI tool like ToxMod can proactively identify toxic voice chat from across your ecosystem, categorize it and hand you a prioritized queue of the very worst stuff your users are up to. And it’s smart enough to filter out playful trash talk, reclamation of slurs or villainous roleplay, as opposed to true harassment and bullying. 4. User report portals, appeals portals and timely responses Most platforms already offer these platforms, but some of these new regulations also require that platforms respond to every single report, in a timely way, and include context about what decision has been made and why. But assessing player reports can be quite costly, especially given that many users submit false reports out of malice or mischievousness. While human moderators can never be replaced, moderation solutions can help make them more efficient by automating some of the busywork like assessing when action on a ticket needs to be taken and closing tickets and issuing reports when there’s no evidence of a violation. Violations are escalated to the studio’s moderation team, with enough available evidence to make an accurate call and be able to explain, when they action a user, what part of the code of conduct was violated and provide clear justifications for any punishments, which not only ensures compliance but also, according to EA , can massively reduce repeat offenses from players. 5. Regular risk assessments Again, automated moderation platforms are your best bet here to minimize risks and offer comprehensive protections to players while complying with privacy regulations. 
It’s vital to use a platform that has documented high accuracy across major types of harms and has been battle-tested by top games. A solution can also provide insights into the behaviors of players, for a view into the greatest risks to their players’ safety and experience, as well as offer design improvements and moderation strategies to attack the problem at its source. 6. Configuring privacy by default for all kids California’s Age-Appropriate Design Act includes a potent requirement that platforms ensure children start with the strictest possible privacy protections enabled. While this is ultimately just a UI update for the platforms, it does raise an important question – how do you know which users are children? It’s essential to incorporate age assurance like ID or payment checks very early in the onboarding process, but they’re not foolproof, as the recent Epic Games / FTC case shows. If a developer knows there are children on your platform, it’s even more important to go the extra mile to identify and protect them. Tools like voice-based analysis can identify underage users – Modulate’s own system has reached over 98% accuracy and counting. Staying ahead of the game Proactive steps like these are essential to ensure compliance, but partnering with a safety and privacy expert can help take on some of the burden. They can provide significant relief to internal teams, ranging from technology solutions that help minimize harmful content at scale, to support with risk assessments, transparency reports and more. In the end, not only are you meeting regulatory standards, but also creating a safer and more positive online experience for users. Written in collaboration with Tess Lynch, Privacy Associate at Premack Rogers. Dig deeper: Go here for more info on how game studios can keep tabs on the changing regulatory landscape, take proactive steps and incorporate sophisticated technology to scale privacy and safety efforts. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
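The transparency-report inputs described earlier in this piece — action rates, how often appeals are overturned and the accuracy of player reports — ultimately reduce to simple aggregates over moderation logs. The sketch below illustrates those calculations generically; the field names and sample records are assumptions, not how ToxMod or any particular studio computes them.

# Minimal sketch: turn moderation log records into transparency-report metrics.
# Field names and sample data are illustrative assumptions.
reports = [
    {"actioned": True,  "appealed": True,  "overturned": False, "report_valid": True},
    {"actioned": True,  "appealed": True,  "overturned": True,  "report_valid": True},
    {"actioned": False, "appealed": False, "overturned": False, "report_valid": False},
]

total = len(reports)
actioned = sum(r["actioned"] for r in reports)
appeals = [r for r in reports if r["appealed"]]
valid = sum(r["report_valid"] for r in reports)

print(f"action rate:            {actioned / total:.0%}")
if appeals:
    overturned = sum(r["overturned"] for r in appeals)
    print(f"appeal overturn rate:   {overturned / len(appeals):.0%}")
print(f"player report accuracy: {valid / total:.0%}")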
3,151
2,023
"Samsung Gaming Hub reaches up to 21M devices with 3,000 streamed games | VentureBeat"
"https://venturebeat.com/business/samsung-gaming-hub-reaches-up-to-21m-devices-with-3000-streamed-games"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Samsung Gaming Hub reaches up to 21M devices with 3,000 streamed games Share on Facebook Share on X Share on LinkedIn Hip Hop Gamer is drumming up some excitement for Samsung Gaming Hub and Xbox cloud gaming. Samsung Gaming Hub , the video game streaming platform accessible directly via Samsung Smart TVs, celebrates its first anniversary today with the announcement of a new brand identity and continued expansion. The platform has grown rapidly in its first year, with monthly active users increasing thirteen-fold from July 2022 to May 2023 and players from nine territories able to access high-quality game streaming on over 21 million Samsung devices. The platform now offers over 3,000 games through partner services, including triple-A titles like Halo Infinite, indie games and arcade classics. It kicked off with streaming partners such as Xbox, Nvidia GeForce Now and Utomik, and it has since added Amazon Luna, Antstream Arcade, and Blacknut as streaming partners. Samsung also announced its Samsung Game Portal yesterday to make it easy for gamers to buy gaming accessories at Samsung.com. “Today, we’re excited to reflect on the first steps that Samsung Gaming Hub has taken to deliver on that promise, and share a first look at the new brand identity rolling out worldwide in the coming months,” said Mike Lucero, head of product management for gaming at Samsung Electronics, in a statement. “Samsung Gaming Hub brings the best of gaming together, invites players to come as they are, and empowers those players with more options in how they want to play: whether it’s using their Samsung Smart TV remote to play arcade classics like Pac-Man and Space Invaders, or digging in for a full Halo campaign with a PlayStation DualSense controller.” Samsung Gaming Hub is also available on the 2023 TV lineup and the Odyssey OLED G9 monitor, and is accessible across multiple regions with added compatibility for partner apps with 2021 smart TVs. Samsung Gaming Hub and partner apps through the Smart Hub are compatible with 90% of Bluetooth controllers on the market and can be played using a Samsung TV remote. The new brand identity illustrates Samsung’s commitment to democratizing gaming for everyone by unlocking access to thousands of games with almost any controller. Samsung Gaming Hub is available in Brazil, Canada, France, Germany, Italy, Korea, Spain, the U.S, and the U.K. with availability in more regions coming soon. Bethesda’s Starfield will be coming to the hub in September. 
During the past year, Samsung showed up with its “Playing is believing” campaign at events like Summer Game Fest, The Game Awards, Gamescom and San Diego Comic-Con, where Samsung Gaming Hub was featured. The platform’s seamless access to games without downloads or waiting has impressed creators and journalists who were initially skeptical about game streaming, with many asking “what’s next” for Samsung Gaming Hub. I played Halo Infinite on the hub as a streamed title, and it worked pretty well at the recent Summer Game Fest. The aim of the refreshed branding is to establish Samsung Gaming Hub as an innovative leader in gaming and reinforce Samsung’s dedication to game streaming across its devices. Samsung believes platform’s continued expansion and focus on accessibility make it a compelling choice for gamers looking to play their favorite games without the need for a console or PC. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,152
2,023
"How human-centered automation adds value to IT service desks | VentureBeat"
"https://venturebeat.com/automation/how-human-centered-automation-adds-value-to-it-service-desks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight How human-centered automation adds value to IT service desks Share on Facebook Share on X Share on LinkedIn Presented by TOPdesk AI and automation can make your service desk future-proof. In this VB Spotlight you’ll learn how to automate the crucial IT tasks that will reduce end-user down-time and boost customer experience, decrease IT service desk calls by 40% and more. Watch for free on demand. With the growing maturity of AI and automation, there’s a huge opportunity for IT service desks to take much of the repetitive, time-consuming and error-prone work out of the hands of humans. From approvals to software rollouts, automation can help serve end users more quickly and more efficiently. But it’s not a matter of automating anything that can be automated — there are core factors that every enterprise should consider when first evaluating the state of their service desks and operations, says Barclay Rae, consultant, author and co-host of the Enterprise Digital podcast. “We also have to make sure that we understand who our customers are, whether they’re ready for that and what the impact will be on them,” Rae says. “I can’t suddenly stick something in that might make it more efficient for me, but it might annoy my customers. Context is everything. I don’t want to sound as if I’m trying to slow it down. But we must be clear on what we’re expecting to get out of this and how we will then go about implementing it.” It’s also crucial to not only have clear, manageable goals that directly improve the customer experience, but ensure that there’s usable, accurate data and solid processes in place already, or automation will fail, and that’s also not a great look for a company. Lower-hanging fruit that immediately offers an impact are the standard higher-volume requests or processes — repetitive requests around passwords, processes like approving and installing new computers or new software, says Jeffrey Jacoby, US services team lead at TOPdesk. You could even consider routine cross-departmental processes such as onboarding, offboarding, or transfers which happen often, sometimes daily, for larger organizations. “Leveraging automation for these standard processes can simplify the workflow for your technicians and streamline the workflow process overall,” Jacoby explains. 
“Some benchmarks, which you could think about more down the road could be chatbots to make the interface a bit easier for those end users to add input, maybe even supplier integrations or third-party systems, like an asset management system.” Creating a roadmap to automation Of course, like any other technology implementation, automation requires a plan and expected deliverables. IT automation needs to be handled carefully because so many processes touch the end consumer directly. Automating processes on the backend, such as reviewing queues and backlogs, can be handled in the background and very quickly. But most of the processes involved will require some interaction with the business or customers, such as incidents and requests, which you want to make as seamless as possible. “The simpler we can make that, and the less onus we put on the customer to make some decision about it, the better,” Rae says. “We want to have a nice easy portal to interact with. I want this, I’m already approved. That’s where the whole approval system behind the scenes needs to be agreed with your business in advance, rather than just saying you’re going to plug it in and it’s going to work. It’s not going to work if all it does is automate a backlog and a queue for some managers to approve stuff.” That involves tasks on the roadmap such as sitting down with users, whether internal, external or both, to make clear where they need to be involved, how they will interact with those processes once they’re established, and how they’ll test any implementation. Natural language systems, and bots particularly, generally take longer. The more questions a user can ask, the more significantly you’re multiplying potential results and outcomes. “You have to be realistic about those things,” Rae explains. “People go on planes and read in-flight magazines that say, oh, yes, you can automate this and it’s fantastic. But there’s a lot of work to be done to make sure that it works properly. And yes, they are easy to use. Yes, they are cheaper and easier to deploy. But that doesn’t take away the fact that you have to consult with people and build that into your planning.” “We want to know and make sure that everyone is on the same page, knowing which processes are automated and which still require that manual input,” Jacoby adds. “After testing and deploying, monitoring and data insights will help for optimizations that can be made further down the road.” The human element of automation The technical benefits of automation are clear, but the people in the equation are the most important element, and planning needs to take that into consideration. The anatomy of a normal service desk call includes an array of components: the technical, the business, and the people and emotional, Rae says. So, no matter where you are in a system, the human using it has to be able to back out and return to the human service agent behind the system, because sometimes there’s no replacing that connection. “In some ways, when we’re talking about automation, it forces us to re-appraise our own value to some extent,” he explains. “What are we good at? What do we need to do? What do we still need to maintain and not lose, just by doing this? There’s no point in just doing it for the sake of it.” There’s also the fact that older demographics tend to not want to interact with automated systems at all, and it’s important to be mindful of that as service providers, he adds. 
And it has nothing to do with that stereotype that old people are bewildered in the face of new tech — that’s one that should be discarded. “It’s whether I will trust it,” Rae explains. “It’s whether the interface is working in a way that I can use, understand, enjoy and get value from, with trust being a big part of that. I know some suppliers that I deal with where I will just not trust or use their system. I’ll do everything to get around it and talk to a person. And then there are others where I would have never thought of them as being particularly technical, but they have a very simple but effective automated interface.” The human element of ROI The most important metrics of automation come down to the human side of things, again, to determine whether a solution is providing an adequate ROI. That includes the actual cost and time savings of the task, along with productivity and efficiency of staff impacted. For example, a task that can improve the routing of incidents or tickets and cut down the number of reassignments, or that reduces the number of incidents that users experience, rather than just turning them over week to week. There’s also the feedback of the end-user base, Jacoby says. “Are they happy with the automations that we’ve deployed?” he says. “It’s not just looking at the repetitive tasks and how many we’re automating, but also are people satisfied with the automations we have given them to use?” Watch free on demand here. Agenda Core factors to evaluate the current state of your business and service desk Transform and simplify your IT processes with automation Accelerate delivery of internal processes and services Presenters Barclay Rae , Consultant; Author; Co-host of the Enterprise Digital Podcast Jeffrey Jacoby , US Services Team Lead, TOPdesk Art Cole , Moderator, VentureBeat The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
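The ROI signals Rae and Jacoby point to — fewer ticket reassignments, fewer repeat incidents and end-user satisfaction with the automations — come down to straightforward aggregation over ticket data. The sketch below is a generic illustration with assumed field names, not TOPdesk's reporting model.

# Minimal sketch: compare simple service-desk metrics for automated vs. manual tickets.
# Field names and sample data are illustrative assumptions.
tickets = [
    {"id": 1, "automated": True,  "reassignments": 0, "csat": 5},
    {"id": 2, "automated": False, "reassignments": 3, "csat": 2},
    {"id": 3, "automated": True,  "reassignments": 1, "csat": 4},
]

def avg(values):
    return sum(values) / len(values) if values else 0.0

for flag in (True, False):
    group = [t for t in tickets if t["automated"] is flag]
    name = "automated" if flag else "manual"
    print(f"{name}: avg reassignments {avg([t['reassignments'] for t in group]):.1f}, "
          f"avg CSAT {avg([t['csat'] for t in group]):.1f}")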
3,153
2,023
"The Transform AI Survey: Help discover the state of generative AI | VentureBeat"
"https://venturebeat.com/ai/the-transform-ai-survey-help-discover-the-state-of-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event The Transform AI Survey: Help discover the state of generative AI Share on Facebook Share on X Share on LinkedIn VentureBeat is on a mission to uncover the state of today’s AI technology landscape, and we need your input. We’re not only looking for the companies currently getting their hands dirty, integrating generative AI into workflows and product development, but the organizations just getting started on harnessing the power of new AI innovations. So it’s time for our annual AI survey , alongside our flagship event, VB Transform 2023: Get Ahead of the Generative AI Revolution , in San Francisco on July 11-12 and online. By participating in the survey, you’ll help us illuminate the challenges businesses are facing, get a look at how AI is evolving in the real world, learn exactly where your company stands on the AI adoption curve and more. It’ll take only a few minutes of your time. In return, you’ll gain an exclusive look at the full survey results. Plus, respondents will receive an invitation to join this year’s VB Transform event, where VentureBeat CEO Matt Marshall will share the top takeaways and trends derived from the survey results. The deadline is coming up fast — submit your answers before July 3, and then get ready to embrace the transformative power of generative AI at VB Transform 2023. Over two days in San Francisco, attendees will hear from top industry experts on the evolving strategy and technology that’s surrounding the evolution of generative AI, from how companies are applying it in innovative new ways with real-world case studies, to the businesses that are expanding its possibilities with OpenAI plugins, automation, data analytics, intelligent IoT, computer vision and more, in vertical tracks across healthcare, finance, retail, manufacturing, security and technology. Take the AI survey by July 3, or register today to join the conversation at VB Transform 2023. >> Follow all our VentureBeat Transform 2023 coverage << The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,154
2,023
"Sourcegraph unveils Cody 5.1, a free code AI tool that can write entire files and tests | VentureBeat"
"https://venturebeat.com/ai/sourcegraph-unveils-cody-5-1-a-free-code-ai-tool-that-can-write-entire-files-and-tests"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sourcegraph unveils Cody 5.1, a free code AI tool that can write entire files and tests Share on Facebook Share on X Share on LinkedIn Image Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Sourcegraph , a leader in universal code search and AI-assisted software engineering, announced the release of Cody version 5.1 today, a major upgrade to its AI coding assistant. The new version provides Cody with a broader view of code context across repositories and improved automation capabilities, allowing it to generate code, fix bugs and refactor projects with less human intervention. In an exclusive interview with VentureBeat, Sourcegraph CEO Quinn Slack discussed the new Cody desktop app and its ability to build context for code AI. By allowing developers to point Cody at their local code, the app can better understand the codebase and even write entire tests and files. “Cody now has a deep understanding of codebases that lets developers trust it to write entire files, fix bugs and answer questions about code they’ve never even seen,” he said. The key enhancements in Cody 5.1 , according to Slack, are the ability to understand context across multiple repositories in a codebase and new automation “recipes” that can perform more complex software engineering tasks like optimizing performance, fixing code smells and generating unit tests. Developers get inline access to Cody through a chat interface in their code editors, and Cody can now make changes directly to code. Cody 5.1 poses challenges for competitors like GitHub’s Copilot , an autocomplete tool that relies primarily on a developer’s current code context. “Copilot was awesome when it was released two years ago, but it hasn’t really changed that much,” said Slack. “Anyone who’s used ChatGPT knows AI could do so much more than a fancy autocomplete.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Better autocomplete and new recipes The wider code context comes from Sourcegraph’s strengths as a leading code search and analysis platform, which Cody now taps into. “Cody benefits from 10 years of us building a leading code search engine,” Slack said. The multi-repository context and more advanced natural language understanding enable Cody to handle ambiguous questions and requests, as well as write idiomatic code by learning patterns across a codebase. 
Slack explained that the desktop app generates a local code graph by indexing the code for search and building embeddings for semantic search, enabling the editor to communicate with the app for context when developers use Cody. “Cody is the first code AI to autocomplete based on context from the entire repository, using embeddings-based semantic search,” Slack told VentureBeat. “This means Cody can generate better code that uses more of your codebase’s own APIs and idiomatic usage patterns, compared to GitHub Copilot and others that only use recent files and open tabs.” Going beyond autocomplete Slack also said that Cody 5.1 goes beyond autocomplete and can perform higher-level coding tasks such as writing entire files, tests, docstrings, variable names, release notes, pull request descriptions, optimizing performance, fixing code smells and answering questions about the codebase. “Cody can explain, write, fix and refactor code using your codebase’s own APIs, documentation, and usage patterns,” said Slack. “This goes way beyond autocomplete or prompt engineering. It’s possible only because Cody supplies context about your own code to a powerful LLM [large language model], so it can perform higher-level coding tasks.” Cody 5.1 also introduces new features such as inline chat, which allows developers to ask questions and request changes on specific regions of code files; support for JetBrains IDEs, such as IntelliJ, PyCharm, WebStorm; and the Cody desktop app, which makes it easy for individuals to use Cody on their private code in their editor and in a chat UI. Cody 5.1 is free for developers on both public and private code, with a generous rate limit. Sourcegraph charges only for team/company/enterprise features or for exceeding the rate limit. Sourcegraph Enterprise Server users need to upgrade to version 5.1 to get the new features of Cody. According to Slack, Cody 5.1 uses more context from the entire codebase and multiple repositories, as well as a more powerful language model, Anthropic Claude , to generate more accurate and consistent code suggestions. The future of AI in coding Discussing the role of the open development community in contributing to Cody 5.1, Slack said, “Cody is open source. It’s Apache 2.0, and we’ve received a lot of contributions. I think we’ve got 20 different contributors so far and w’ve got hundreds of people on our Discord.” He further emphasized the importance of having an open platform and API for developers to get the most out of a product like Cody. As for the future of AI in coding, Slack envisions a future where AI agents can take multiple steps to improve code without human intervention. However, he believes that building trust between developers and AI is crucial before reaching that stage. “We’re really excited about [the future of AI in coding],” said Slack. “We’re tracking that really closely. We’re building up to that with Cody as well. Now, we have to proceed cautiously, because at the moment, you have a code AI writing code where no human reviews it, then that’s the point at which the limits to adoption are off.” Sourcegraph’s vision for AI The new release is an important step for Sourcegraph in its vision for AI that can automate complex, multi-step software engineering tasks. The company has to proceed cautiously, said Slack, to ensure the AI generates code and outcomes that are appropriate for existing codebases in enterprise settings. But progress toward more advanced automation could significantly boost developer productivity. 
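The pattern described here — indexing a codebase into embeddings, then retrieving the most relevant chunks as context for the LLM — can be illustrated with a generic retrieval sketch. This is not Cody's actual code; the embedding model, the chunking and the sample snippets are simplifying assumptions.

# Minimal sketch: embeddings-based retrieval of code chunks to ground an LLM prompt.
# Model choice, chunking and repository contents are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

chunks = [
    "def create_user(db, email): ...  # inserts a row into the users table",
    "class PaymentClient: ...  # wraps the billing API with retries",
    "def send_welcome_email(user): ...  # renders and sends the onboarding email",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, convert_to_tensor=True)

question = "How do we onboard a new user?"
q_vec = model.encode(question, convert_to_tensor=True)

# Take the top-2 chunks and prepend them to the question as codebase context.
top = util.cos_sim(q_vec, chunk_vecs)[0].argsort(descending=True)[:2]
context = "\n".join(chunks[int(i)] for i in top)
print(f"Codebase context:\n{context}\n\nQuestion: {question}")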
“Our approach — more and better context, more powerful LLM — is different from that of other AI code autocomplete tools that optimize for limited context and small models,” he said. “We’re optimistic that this maximal approach will definitively surpass the minimal approach.” Sourcegraph is a San Francisco-based company that was founded in 2013 by Slack and Beyang Liu. The company has raised $248 million in funding from investors such as Sequoia Capital, Andreessen Horowitz, Insight Partners and Geodesic Capital. Sourcegraph’s annual revenues are estimated to be between $10 million and $50 million, and it has around 160 employees. Sourcegraph’s customers include Amazon, PayPal, Lyft, Uber, Yelp, Cloudflare, Plaid, GE and Atlassian. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,155
2,023
"Salesforce's Sales GPT and Service GPT integrate generative AI | VentureBeat"
"https://venturebeat.com/ai/salesforce-launches-sales-gpt-service-gpt-to-ease-customer-interactions-through-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce launches Sales GPT, Service GPT to ease customer interactions through generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Salesforce today introduced new generative AI workflow tools for Sales Cloud and Service Cloud during its presentation at World Tour London. The new capabilities — Sales GPT and Service GPT respectively — are designed to simplify workflow and customer engagement for sales and service teams. The company said the tools will enable teams to accelerate deal closures, anticipate customer needs and enhance productivity. Salesforce’s AI solution, Einstein GPT, will power the new GPT services from the backend, operating within an open ecosystem using proprietary real-time data. To address enterprise data security and compliance, Salesforce said that the Einstein GPT’s trust layer will protect sensitive customer data, preventing the large language models (LLMs) from retaining it and thereby maintaining data governance. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Through our new solution, we will be offering unique, trusted AI capabilities with embedded security, ethical guardrails, and guidance to catch potential problems before they happen,” Bill Patterson, EVP and GM of C360 applications at Salesforce, told VentureBeat. Salesforce also announced plans to integrate trusted generative AI capabilities into the workflow of the company’s various other offerings, such as Marketing, Commerce, Slack, Tableau, Flow and Apex. In line with other customer-centric vendors, Salesforce joins the wave of generative AI technology advancements with this announcement. Numerous tech giants and smaller vendors have unveiled or announced plans for integrations of generative AI in recent months. Enhancing customer experience through generative AI Salesforce emphasized the potential of generative AI to transform the roles of sales and service professionals. In a recent survey by Salesforce of over 4,000 full-time employees, approximately 73% of workers expressed concerns about new security risks associated with generative AI. Despite these concerns, the majority (61%) already use or intend to use the technology in their work. The survey also found that nearly 60% of those planning to use generative AI admit to a lack of knowledge about its implementation with respect to trusted data sources and data security. 
“We believe that mainstreaming the power of generative AI is dependent on first building a foundation of trusted data, security and ethics,” Salesforce’s Patterson told VentureBeat. The new generative AI offerings will assist CX and CRM vendors’ platforms automatically generate personalized emails. With Einstein GPT, Sales Cloud users can create relevant emails through Sales GPT using CRM data. The company claims that sales reps will no longer need to take notes manually, as calls will be automatically transcribed and summarized. This should improve productivity by enabling prompt follow-ups. Sales and service Sales GPT encompasses the entire sales cycle, including account research, meeting preparation, and drafting of contract clauses. It offers AI-generated summaries and actions integrated with the Salesforce CRM platform, ensuring automatic updates. Service GPT, for its part, will empower field service teams with AI-driven personalized responses, automatically generating them based on real-time customer data. The company asserts that this will enable service agents to expedite resolution of customer issues. “With Service GPT, customer service teams can harness real-time data and AI they can trust to deliver experiences that help you stay ahead of the curve and scale highly personalized service to every customer,” explained Patterson. “Instead of manually drafting replies to common issues, service agents can use Service Replies to address customer problems quickly and accurately. By freeing time for service agents, they can better assist customers with more complex issues.” Productivity plus security Patterson pointed out that Einstein GPT will combine AI-powered productivity and data security in Sales GPT and Service GPT. In addition to benefiting from the AI productivity enhancements provided by Einstein, which generates over 200 billion AI-powered predictions daily, users can rely on the zero-retention data policy of the Einstein GPT Trust Layer. This policy ensures responsible handling of users’ data and their customers’ data. “AI has been integral to Salesforce for years as we’ve integrated Einstein AI technologies across the Customer 360 platform. Generative AI now has the potential to introduce exciting new opportunities across sales, customer service, marketing, commerce and IT, and there’s an undeniable level of enthusiasm to realize that potential,” said Patterson. “By unifying data with Data Cloud and Customer 360, organizations can now unlock a complete view of every customer, allowing them to create unique experiences.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
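As a rough illustration of the general pattern behind features like Sales GPT's email drafting — grounding an LLM prompt in CRM record fields while keeping sensitive values out of the request — the sketch below assembles such a prompt and calls a hosted model. It is a generic example; the field names, the allow-list masking and the OpenAI model used here are assumptions, not Salesforce's Einstein GPT implementation.

# Minimal sketch: draft a follow-up email grounded in selected CRM fields via an LLM.
# Field names, masking rule and model choice are illustrative assumptions.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

record = {
    "contact_name": "Dana Smith",
    "company": "Acme Corp",
    "last_meeting": "demo of the analytics add-on on June 12",
    "phone": "555-0142",  # sensitive: deliberately excluded from the prompt
}
ALLOWED_FIELDS = ["contact_name", "company", "last_meeting"]
context = "\n".join(f"{k}: {record[k]}" for k in ALLOWED_FIELDS)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Write a short, professional sales follow-up email."},
        {"role": "user", "content": f"CRM context:\n{context}"},
    ],
)
print(response.choices[0].message.content)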
3,156
2,023
"Runway draws fresh $141 million as next-level generative AI video begins to emerge | VentureBeat"
"https://venturebeat.com/ai/runway-draws-fresh-141-million-as-next-level-generative-ai-video-begins-to-emerge"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Runway draws fresh $141 million as next-level generative AI video begins to emerge Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Runway , one of the hottest generative AI startups with its text-to-image video tools, has announced a fresh round of funding, adding $141 million in a series C from Google, Nvidia and Salesforce Ventures, among other investors. The New York City-based company said in a press release that it will use this new financing to “further scale in-house research efforts, expand its world-class team, and continue to bring state-of-the-art multi-modal AI systems to market, while building groundbreaking and intuitive product experiences.” Runway began with a mission to build AI for creatives In March, VentureBeat spoke to Runway CEO and cofounder Cristóbal Valenzuela. He discussed the gated launch of Runway’s Gen-2 tool, which is now generally available, and the company’s founding four years ago with a mission to build AI tools specifically for artists and creatives. “Since then, we’ve been pushing the boundaries of the field, and building products on top of that research,” he said, saying Gen-2 is a “big step forward” in the company’s text-to-video efforts. He pointed to the company’s millions of users, ranging from award-winning movie directors and advertising and production companies all the way down to small creators and consumers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We’ve built an incredibly tight community that has helped us understand how actually creatives are using generative AI in their work today,” he said, pointing to Runway’s work for the Oscar-winning movie Everything Everywhere All at Once. One of the film’s editors used Runway to help with effects on a few shots. “So we have a lot of folks who have helped us understand how these models are going to be used in the context of storytelling,” he explained. “We’re heading to a world where most of the content and media and videos that you consume will be generated, which requires a different type of software and tools to allow you to generate those kind of stories.” Runway’s growth comes as artists push back on generative AI Runway’s efforts, however, come at a time when artists are pushing back against generative AI. 
For example, thousands of screenwriters have been on strike for over two months, halting many movie and television productions, because they want limits on the use of generative AI. And VentureBeat recently reported that Adobe Stock creators are unhappy with the company’s generative AI model Firefly. According to some creators, several of whom VentureBeat spoke to on the record, Adobe trained Firefly on their stock images without express notification or consent. There are also several lawsuits pending in the generative AI space. Just today, for example, plaintiffs filed suit against OpenAI, claiming the company used “stolen data” to “train and develop” its products including ChatGPT 3.5 , ChatGPT 4 , DALL -E and VALL-E. Three cofounders attended art school “We do a lot of listening and are part of the community,” said Valenzuela, pointing to Runway’s AI Film Festival in March as an example of driving conversations and understanding how these technologies will be used by professional filmmakers and storytellers. “I do think there’s confusion around how these algorithms are already being used in creative environments,” he said. “There’s a misconception that … you have the systems do everything for you and you have no input. We don’t see it like that. We see these tools as tools for human augmentation. They’re tools for enhancing creativity. They’re not tools for replacing creativity.” Valenzuela emphasized that he comes from an art background. “I went to art school and I started Runway while I was an artist,” he said. “These are tools I wanted to use.” Originally from Chile, Valenzuela came to New York City to attend the Tisch School of the Arts at New York University — where he met his cofounders Anastasis Germanidis and Alejandro Matamala — but soon realized that his artwork was better suited to making tools. “My art was toolmaking, I was eager to see artists using the tools I was making,” he said. “So I went deep into the rabbit hole of neural networks — the idea of computational creativity.” As far as commenting on issues of copyright, fair use and work replacement cited by artists, Valenzuela maintained that it is still “very early” in understanding all the implications of generative AI. “We’re really trying to make sure we can drive this conversation to a positive end,” he said. “I think listening is still the most important aspect. I think being open to change and being able to adapt and understand how things are going to be used, those are the driving factors of how we think about our product. I can’t really speak for other companies and how the other companies are thinking about the space, but for us, we have the commitment to our users.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,157
2,023
"Perception Point launches AI model to combat generative AI-based BEC attacks | VentureBeat"
"https://venturebeat.com/ai/perception-point-launches-ai-model-to-combat-generative-ai-based-bec-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Perception Point launches AI model to combat generative AI-based BEC attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Perception Point , an internet security platform, revealed its latest innovation to counter the rising tide of AI-generated email threats. The company’s new detection technology employs AI-powered large language models (LLMs) and deep learning architecture to identify and thwart business email compromise (BEC) attacks facilitated by generative AI technologies. Criminals are exploiting generative AI technology to carry out sophisticated, precisely targeted attacks against organizations of all sizes. The technology has emerged as a new potent tool for cybercrime, especially in social engineering and BEC attacks, as it enables the creation of high-quality, personalized emails that resemble human output. According to Verizon’s recent data breach investigation report , over 50% of social engineering incidents can be attributed to BEC. Perception Point’s 2023 annual report also reveals an 83% surge in BEC attempts. To address this escalating threat, the company has developed an innovative detection model based on LLMs, which utilize transformers — AI models capable of comprehending the semantic context of the text, similar to renowned LLMs such as OpenAI’s ChatGPT and Google’s Bard. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The solution can therefore identify distinct patterns in LLM-generated text, a crucial factor in detecting and thwarting gen AI-based threats. Beyond legacy security solutions Perception Point asserts that conventional security vendors often fail to achieve the required level of detection accuracy through contextual and behavioral analysis. The company states that while advanced email security systems use contextual and behavioral detection, they still struggle to identify the newly enhanced attacks facilitated by generative AI. This is because these attacks circumvent the typical patterns that the detection methods were originally designed to recognize. Moreover, the company claims that solutions currently available in the market rely solely on post-delivery detection. That means the malicious email can sit in the user’s inbox for an extended period before being removed. 
“Legacy email security solutions which rely on signatures and reputation analysis struggle to stop even the most basic payload-less BEC attacks,” Tal Zamir, CTO of Perception Point, told VentureBeat. “Our new model’s key strength lies in recognizing the repetition of identifiable patterns in LLM-generated text. The model uses a unique three-phase architecture that detects BEC at the highest detection rates and minimizes false positives.” Zamir said the solution’s distinction lies in its comprehensive scanning of all emails, quarantining those identified as malicious before they reach the user’s inbox. He explained that this proactive approach eliminates the risks and potential damages associated with detection-based methods that rely on identifying and addressing threats once they have infiltrated the system. Additionally, the solution incorporates a managed incident response service, relieving customers’ SOC teams of the responsibility to swiftly respond to incidents and deploy new algorithms in real time to counter novel and emerging threats. Perception Point claims its model exhibits exceptional speed in processing incoming emails, with an average time of 0.06 seconds. The model was initially trained on hundreds of thousands of malicious samples captured by the company and is continuously updated with new data to optimize its effectiveness. Leveraging generative AI to minimize email-based attacks Perception Point’s Zamir said the new attacks include cybercriminals exploiting fake emails to impersonate trusted organizations. Using social engineering techniques, the attackers deceive employees into transferring large sums of money or disclosing confidential data. “Attackers exploit the fact that employees in the modern enterprise are the weakest link in the organization regarding security,” Zamir told VentureBeat. “They are leveraging BEC text-based attacks, which normally do not have malicious payloads such as URLs or malicious files, and thus bypass traditional email security systems, arriving into the users’ inboxes.” He further stated that the emergence of generative AI, specifically LLMs, has given a boost to impersonation, phishing and BEC attacks. This advancement empowers cybercriminals to operate at greater speed and scale than ever before. “Tasks that once required extensive time and effort, such as target research, reconnaissance, copywriting and design, can now be accomplished within minutes using carefully crafted prompts,” said Zamir. “This amplifies the threat by expanding the pool of potential victims and significantly increasing the chances of successful attacks.” To reduce false positives that arise from the extensive use of generative AI for legitimate emails, Perception Point uses a distinctive three-phase architecture in its model. Following an initial scoring process, the model employs transformers and clustering algorithms to categorize email content. By integrating insights from these stages with supplementary data, such as sender reputation and authentication protocol information, the model predicts whether an email is AI-generated and determines if it presents a potential threat. “Our model dynamically scans every email, including the embedded URLs and files, with a patented HAP (Hardware Assisted Platform) detection layer. This is our proprietary next-gen sandbox that dynamically scans content at the CPU/memory level,” said Zamir. What’s next for Perception Point? 
Zamir said that his company aims to develop AI capabilities to sift through vast amounts of data, identifying potential threats and providing customers with actionable intelligence. He emphasized that integration of generative AI bots into collaboration apps like Slack or Teams, browsers like Edge, and cloud storage services like Google Drive or OneDrive has created new avenues for potential attacks. “Perception Point recognizes these emerging threats, and we are developing AI security solutions designed to prevent, detect and respond to the ever-increasing threat landscape complexity,” said Zamir. “We will continue to ensure that our clients can leverage the power of generative AI without compromising their security posture.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
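A heavily reduced illustration of the detection idea described in this piece — scoring email text with a transformer-based classifier and blending that score with sender signals such as authentication results — is sketched below. The classifier model named is a hypothetical placeholder, and this is not Perception Point's three-phase model; it only shows the general shape of such a check.

# Minimal sketch: combine a transformer text-classifier score with a sender signal.
# The model identifier is a hypothetical placeholder, not a real published model.
from transformers import pipeline

detector = pipeline("text-classification", model="my-org/ai-text-detector")  # hypothetical

def bec_risk(body: str, sender_passed_dmarc: bool) -> float:
    """Return a 0-1 risk score; higher means more likely AI-generated BEC."""
    result = detector(body[:2000])[0]  # e.g. {"label": "AI_GENERATED", "score": 0.97}
    ai_score = result["score"] if result["label"] == "AI_GENERATED" else 1 - result["score"]
    reputation_penalty = 0.0 if sender_passed_dmarc else 0.2
    return min(1.0, 0.8 * ai_score + reputation_penalty)

email = "Dear colleague, please process the attached wire transfer before end of day..."
print(bec_risk(email, sender_passed_dmarc=False))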
3,158
2,023
"Oracle taps generative AI to streamline HR workflows | VentureBeat"
"https://venturebeat.com/ai/oracle-taps-generative-ai-to-streamline-hr-workflows"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Oracle taps generative AI to streamline HR workflows Share on Facebook Share on X Share on LinkedIn Oracle cofounder and chief technology officer Larry Ellison demonstrates Oracle's second-generation cloud infrastructure onstage at the Oracle OpenWorld conference in San Francisco on September 20, 2016. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Oracle Corp. today announced new generative AI features for its Fusion Cloud Human Capital Management (HCM) offering, making it easier for enterprises to automate time-consuming HR workflows and drive productivity. Underpinned by Oracle Cloud Infrastructure (OCI), the new AI capabilities will help with tasks like writing job descriptions for a new role or quickly drafting an employee survey. These capabilities form part of a broader AI shift happening at Oracle and are expected to expand in the coming months. However, Oracle isn’t the only one moving the needle to improve HR workflows with gen AI. Last month, enterprise resource planning (ERP) leader SAP also announced similar capabilities through a partnership with Microsoft. How is Oracle Fusion Cloud HCM getting better? Oracle Fusion Cloud HCM is an end-to-end cloud solution that enables HR teams to manage every stage of the employee lifecycle, from attracting, screening and hiring talent to onboarding, managing payroll and helping with skill development. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Now, taking the platform to the next level, Oracle is adding three generative capabilities into the loop: assisted authoring, suggestions and summarization. With assisted authoring, as the name suggests, users will be able to leverage built-in prompts to generate content and focus on higher-value tasks. For instance, it could be used to write job descriptions, create automated goals covering detailed descriptions and measures for success and generate HR helpdesk articles to help employees. Accelerating day-to-day tasks The suggestions feature also revolves around content but works automatically, providing teams with useful recommendations to accelerate their day-to-day tasks. For example, it could recommend questions for a survey being designed or give advanced career development tips to help managers better groom their workforce. Similarly, summarization helps teams by providing a quick summary of content such as an employee’s performance overview, considering their managers’ feedback, goal progress and achievements. 
“With the ability to summarize, author and recommend content, generative AI helps to reduce friction as employees complete important HR functions,” said Chris Leone, EVP of applications development for Oracle Cloud HCM. Leone noted that the company has already identified more than 100 high-value scenarios for generative AI in HR and is just getting started. “With the help of customers, who drive approximately 80% of updates to our products, we are continually embedding new use cases that enable organizations to embrace continuous innovation and perpetually improve HR processes and productivity,” he added. Using proprietary data Depending on the use case planned, enterprises can use their proprietary data to refine these generative AI capabilities and make them more suited for specific business needs. This kind of control will not only improve accuracy but also help keep sensitive and proprietary information safe. “Oracle is quickly adding AI capabilities to its business applications, and HCM is a frontrunner with generative AI being able to achieve unprecedented efficiencies for managers, employees, and HR professionals,” said Holger Mueller, principal analyst and VP at Constellation Research. “Oracle has an edge over its HCM competitors thanks to OCI’s superiority running generative AI workloads in the cloud,” Mueller added. “With apps and infrastructure engineered together, Oracle can deliver cheaper, faster, and more integrated processes that create competitive advantage for enterprises.” Notably, the generative AI update for Oracle Fusion Cloud HCM comes just a couple of weeks after the company revealed that it was developing a new cloud service with Toronto-based Cohere to make it easy for enterprises to train their own customized LLMs. The news was shared by Oracle’s founder and CTO Larry Ellison during the company’s fourth-quarter earnings call. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,159
2,023
"Microsoft weaves generative AI fabric for Moody's | VentureBeat"
"https://venturebeat.com/ai/microsoft-weaves-generative-ai-fabric-for-moodys"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft weaves generative AI fabric for Moody’s Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the world of financial risk assessment, New York based Moody’s is a global powerhouse with a vast wealth of information. Now, thanks to a partnership with Microsoft announced today, Moody’s is bringing the power of generative AI to its enterprise. Moody’s is using the Microsoft Azure OpenAI service as the engine that helps to unlock research information and risk assessment capabilities. Among the first services to be deployed is Moody’s CoPilot, which is an internal tool that will help the company’s 14,000 global employees more easily query and access data and research with the power of large language models (LLMs). Looking beyond just AI, Moody’s is also embracing the Microsoft Fabric data management platform — which was announced last month — in a bid to help better manage data for AI and analytics. “The new generative AI tools will further enhance data and risk management capabilities for our employees and customers,” Nick Reed, CPO at Moody’s, told VentureBeat. “Users will leverage the technology to access tailored risk data and insights drawn from across Moody’s vast body of risk data, analytics and research.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why Moody’s is embracing generative AI now Generative AI has sparked the interest of many firms across multiple industries including financial services. Just last month, JPMorgan revealed that it has plans for a ChatGPT-like investment service. Reed commented that Moody’s has long incorporated traditional AI technologies into its solutions to enable the scale and speed to help customers make informed decisions on risk. He noted that Moody’s began evaluating generative AI when it became evident that rapid advances in the technology would help further harness the power of Moody’s proprietary data, analytics and research to help customers and deliver value in new ways and through new channels. “We believe that generative AI represents a huge opportunity for our customers and our employees to make smarter and more informed decisions,” Reed said. Combining knowledge and opinion with generative AI Reed explained that the Moody’s Copilot combines the company’s corpus of knowledge and opinion with the latest LLMs and Microsoft’s generative AI technology. 
He added that Moody’s Copilot is currently in its alpha phase, but with the company’s 14,000 employees acting as innovators, his hope is to very rapidly move to beta and beyond — with all answers and use cases grounded by Moody’s proprietary data assets. “Our vision is that users will seamlessly combine insights on formerly disparate risk areas like credit risk, ESG exposure, and supply chain management,” said Reed. “Risks that Moody’s has a wealth of preexisting data and insights into, but that may have existed in separate silos for our users.” All of this proprietary information will be combined with the significantly lower barrier to accessing it that comes with generative AI. Reed said that Moody’s customers and employees will be able to query data using the best of language models. Compliance, security and the need for enterprise AI Bill Borden, corporate VP of worldwide financial services at Microsoft, told VentureBeat that Moody’s has had a long standing relationship with Microsoft. Borden said that Moody’s was looking to benefit from generative AI in a way that integrates with existing processes and could meet the strict security and compliance needs of the company. It’s an approach that is built on a solid foundation at Microsoft. The move to help support financial service firms with generative AI is an extension of the work that Microsoft has been doing in the sector for a long time, according to Borden. He noted that Microsoft has been working to help financial services firms in their digital transformation journey move to the cloud with a very structured approach. It’s an approach that understands how regulations work in different jurisdictions around the world and has the right controls and governance models in place. With generative AI, Borden said that Microsoft has a firm foundation with responsible AI, which is part of the platform. He also noted that Copilot services that firms like Moody’s are building for their own enterprise data are built on the same infrastructure that Microsoft uses to build its own suite of Copilot services. Why Moody’s is using Microsoft Fabric Moody’s isn’t just using Microsoft Azure OpenAI service to build out its generative AI capabilities — it’s also using the recently announced Microsoft Fabric data technology. Reed said that Microsoft Fabric allows Moody’s users to simplify how they are able to view and analyze data by bringing it all together. “Moody’s has a vast set of proprietary risk data across areas including credit, ESG, commercial real estate, supply chain and much more,” Reed said. “We are evaluating the full set of possible use cases with Fabric to determine our full course of action.” The idea of having a data lake is nothing new for a company like Moody’s. Borden said that data lakes are prevalent across banking, capital markets and insurance companies. With Fabric, Borden said the idea is to have an environment that can source data together from multiple sources and provide governance, data catalog and insights. “Fabric is the integration of our horizontal data platform capabilities,” said Borden. “We’re combining those things together to make it much more consumable for our customers to actually help them with their data strategies.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
3,160
2,023
"Inside the race to build an ‘operating system’ for generative AI | VentureBeat"
"https://venturebeat.com/ai/inside-the-race-to-build-an-operating-system-for-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Inside the race to build an ‘operating system’ for generative AI Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generative AI, the technology that can auto-generate anything from text, to images, to full application code, is reshaping the business world. It promises to unlock new sources of value and innovation, potentially adding $4.4 trillion to the global economy, according to a recent report by McKinsey. But for many enterprises, the journey to harness generative AI is just beginning. They face daunting challenges in transforming their processes, systems and cultures to embrace this new paradigm. And they need to act fast, before their competitors gain an edge. One of the biggest hurdles is how to orchestrate the complex interactions between generative AI applications and other enterprise assets. These applications, powered by large language models (LLMs), are capable not only of generating content and responses, but of making autonomous decisions that affect the entire organization. They need a new kind of infrastructure that can support their intelligence and autonomy. Ashok Srivastava, chief data officer of Intuit , a company that has been using LLMs for years in the accounting and tax industries, told VentureBeat in an extensive interview that this infrastructure could be likened to an operating system for generative AI: “Think of a real operating system, like MacOS or Windows,” he said, referring to assistant, management and monitoring capabilities. Similarly, LLMs need a way to coordinate their actions and access the resources they need. “I think this is a revolutionary idea,” Srivastava said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The operating-system analogy helps to illustrate the magnitude of the change that generative AI is bringing to enterprises. It is not just about adding a new layer of software tools and frameworks on top of existing systems. It is also about giving the system the authority and agency to run its own process, for example deciding which LLM to use in real time to answer a user’s question, and when to hand off the conversation to a human expert. In other words, an AI managing an AI, according to Intuit’s Srivastava. Finally, it’s about allowing developers to leverage LLMs to rapidly build generative AI applications. 
This is similar to the way operating systems revolutionized computing by abstracting away the low-level details and enabling users to perform complex tasks with ease. Enterprises need to do the same for generative AI app development. Microsoft CEO Satya Nadella recently compared this transition to the shift from steam engines to electric power. “You couldn’t just put the electric motor where the steam engine was and leave everything else the same, you had to rewire the entire factory,” he told Wired. What does it take to build an operating system for generative AI? According to Intuit’s Srivastava, there are four main layers that enterprises need to consider. First, there is the data layer, which ensures that the company has a unified and accessible data system. This includes having a knowledge base that contains all the relevant information about the company’s domain, such as — for Intuit — tax code and accounting rules. It also includes having a data governance process that protects customer privacy and complies with regulations. Second, there is the development layer, which provides a consistent and standardized way for employees to create and deploy generative AI applications. Intuit calls this GenStudio , a platform that offers templates, frameworks, models and libraries for LLM app development. It also includes tools for prompt design and testing of LLMs, as well as safeguards and governance rules to mitigate potential risks. The goal is to streamline and standardize the development process, and to enable faster and easier scaling. Third, there is the runtime layer, which enables LLMs to learn and improve autonomously, to optimize their performance and cost, and to leverage enterprise data. This is the most exciting and innovative area, Srivastava said. Here new open frameworks like LangChain are leading the way. LangChain provides an interface where developers can pull in LLMs through APIs, and connect them with data sources and tools. It can chain multiple LLMs together, and specify when to use one model versus another. Fourth, there is the user experience layer, which delivers value and satisfaction to the customers who interact with the generative AI applications. This includes designing user interfaces that are consistent, intuitive and engaging. It also includes monitoring user feedback and behavior, and adjusting the LLM outputs accordingly. Intuit recently announced a platform that encompasses all these layers, called GenOS, making it one of the first companies to embrace a full-fledged gen OS for its business. The news got limited attention, partly because the platform is mostly internal to Intuit and not open to outside developers. How are other companies competing in the generative AI space? While enterprises like Intuit are building their own gen OS platforms internally, there is also a vibrant and dynamic ecosystem of open software frameworks and platforms that are advancing the state of the art of LLMs. These frameworks and platforms are enabling enterprise developers to create more intelligent and autonomous generative AI applications for various domains. One key trend: Developers are piggy-backing on the hard work of a few companies that have built out so-called foundational LLMs. These developers are finding ways to affordably leverage and improve those foundational LLMs, which have already been trained on massive amounts of data and billions of parameters by other organizations, at significant expense. 
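One way to picture how that leverage plays out at the runtime layer is a thin orchestration shim that decides, per request, whether a small, cheap model is enough, whether to pay for a frontier model, or whether to hand the conversation to a human. The sketch below is framework-agnostic and entirely illustrative: the stub models and the routing rule are placeholders, and frameworks such as LangChain provide much richer versions of the same idea.

```python
# A deliberately tiny sketch of the "runtime layer" routing idea.
# The models here are stub callables, not real endpoints.
from typing import Callable

def cheap_model(prompt: str) -> str:
    return f"[small model] {prompt[:40]}..."                 # placeholder for a fine-tuned small LLM

def frontier_model(prompt: str) -> str:
    return f"[frontier model] detailed answer to: {prompt}"  # placeholder for a GPT-4-class API

def needs_human(prompt: str) -> bool:
    return "complaint" in prompt.lower()                     # toy escalation rule

def route(prompt: str, simple_threshold: int = 60) -> str:
    """Pick a handler per request: human hand-off, cheap model, or frontier model."""
    if needs_human(prompt):
        return "handing off to a human expert"
    handler: Callable[[str], str] = cheap_model if len(prompt) < simple_threshold else frontier_model
    return handler(prompt)

print(route("What is my current balance?"))
print(route("Walk me through how this quarter's tax-law changes affect a multi-state payroll."))
print(route("I want to file a complaint about my statement."))
```

Whichever route is taken, the heavy lifting is ultimately done by the foundational models the router delegates to.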
These models, such as OpenAI’s GPT-4 or Google’s PaLM 2, are called foundational LLMs because they provide a general-purpose foundation for generative AI. However, they also have some limitations and trade-offs, depending on the type and quality of data they are trained on, and the task they are designed for. For example, some models focus on text-to-text generation, while others focus on text-to-image generation. Some do better at summarization, while others are better at classification tasks. Developers can access these foundational large language models through APIs and integrate them into their existing infrastructure. But they can also customize them for their specific needs and goals, by using techniques such as fine-tuning, domain adaptation and data augmentation. These techniques allow developers to optimize the LLMs’ performance and accuracy for their target domain or task, by using additional data or parameters that are relevant to their context. For example, a developer who wants to create a generative AI application for accounting can fine-tune an LLM model with accounting data and rules, to make it more knowledgeable and reliable in that domain. Another way that developers are enhancing the intelligence and autonomy of LLMs is by using frameworks that allow them to query both structured and unstructured data sources, depending on the user’s input or context. For example, if a user asks for specific company accounting data for the month of June, the framework can direct the LLM to query an internal SQL database or API, and generate a response based on the data. Unstructured data sources, such as text or images, require a different approach. Developers use embeddings, which are representations of the semantic relationships between data points, to convert unstructured data into formats that can be processed efficiently by LLMs. Embeddings are stored in vector databases , which are one of the hottest areas of investment right now. One company, Pinecone , has raised over $100 million in funding at a valuation of at least $750 million, thanks to its compatibility with data lakehouse technologies like Databricks. Tim Tully, former CTO of data monitoring company Splunk, who is now an investor at Menlo Ventures , invested in Pinecone after seeing the enterprise surge toward the technology. “That’s why you have 100 companies popping up trying to do vector embeddings,” he told VentureBeat. “That’s the way the world is headed,” he said. Other companies in this space include Zilliz, Weaviate and Chroma. What are the next steps toward enterprise LLM intelligence? To be sure, the big-model leaders, like OpenAI and Google, are working on loading intelligence into their models from the get-go, so that enterprise developers can rely on their APIs, and avoid having to build proprietary LLMs themselves. Google’s Bard chatbot, based on Google’s PaLM LLM, has introduced something called implicit code execution , for example, that identifies prompts that indicate a user needs an answer to a complex math problem. Bard identifies this, and generates code to solve the problem using a calculator. OpenAI, meanwhile, introduced function calling and plugins , which are similar in they can turn natural language into API calls or database queries, so that if a user asks a chatbot about stock performance, the bot can return accurate stock information from relevant databases needed to answer the question. 
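The flow just described, where the model turns a natural-language question into a structured call, the application executes it, and the model composes the final answer, looks roughly like the following. This is a hedged sketch against the OpenAI Python SDK as it existed when function calling launched in mid-2023; get_stock_price and its hard-coded prices are placeholders for a real market-data API.

```python
# Minimal sketch of the function-calling flow, using the 2023-era OpenAI SDK.
import json
import openai

def get_stock_price(ticker: str) -> dict:
    prices = {"MSFT": 337.2, "GOOG": 119.7}  # stand-in for a real market-data API
    return {"ticker": ticker, "price": prices.get(ticker.upper())}

functions = [{
    "name": "get_stock_price",
    "description": "Look up the latest share price for a ticker symbol",
    "parameters": {
        "type": "object",
        "properties": {"ticker": {"type": "string"}},
        "required": ["ticker"],
    },
}]

messages = [{"role": "user", "content": "How is Microsoft's stock doing today?"}]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613",
                                     messages=messages, functions=functions)
msg = first["choices"][0]["message"]

if msg.get("function_call"):  # the model asked to call our function
    args = json.loads(msg["function_call"]["arguments"])
    result = get_stock_price(**args)
    messages += [msg, {"role": "function", "name": "get_stock_price",
                       "content": json.dumps(result)}]
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```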
Still, these models can only be so all-encompassing, and since they’re closed they can’t be fine-tuned for specific enterprise purposes. Enterprise companies like Intuit have the resources to fine-tune existing foundational models, or even build their own models, specialized around tasks where Intuit has a competitive edge — for example with its extensive accounting data or tax code knowledge base. Intuit and other leading developers are now moving to new ground, experimenting with self-guided, automated LLM “agents” that are even smarter. These agents use what is called the context window within LLMs to remember where they are in fulfilling tasks, essentially using their own scratchpad and reflecting after each step. For example, if a user wants a plan to close the monthly accounting books by a certain date, the automated agent can list out the discrete tasks needed to do this, and then work through those individual tasks without asking for help. One popular open-source automated agent, AutoGPT , rocketed to more than 140,000 stars on Github. Intuit, meanwhile, has built its own agent, GenOrchestrator. It supports hundreds of plugins and meets Intuit’s accuracy requirements. The future of generative AI is here The race to build an operating system for generative AI is not just a technical challenge, but a strategic one. Enterprises that can master this new paradigm will gain a significant advantage over their rivals, and will be able to deliver more value and innovation to their customers. They arguably will also be able to attract and retain the best talent, as developers will flock to work on the most cutting-edge and impactful generative AI applications. Intuit is one of the pioneers and is now reaping the benefits of its foresight and vision, as it is able to create and deploy generative AI applications at scale and with speed. Last year, even before it brought some of these OS pieces together, Intuit says it saved a million hours in customer call time using LLMs. Most other companies will be a lot slower, because they’re only now putting the first layer — the data layer — in place. The challenge of putting the next layers in place will be at the center of VB Transform , a networking event on July 11 and 12 in San Francisco. The event focuses on the enterprise generative AI agenda, and presents a unique opportunity for enterprise tech executives to learn from each other and from the industry experts, innovators and leaders who are shaping the future of business and technology. Intuit’s Srivastava has been invited to discuss the burgeoning GenOS and its trajectory. Other speakers and attendees include executives from McDonalds, Walmart, Citi, Mastercard, Hyatt, Kaiser Permanente, CapitalOne, Verizon and more. Representatives from large vendors will be present too, including Amazon’s Matt Wood, VP of product, Google’s Gerrit Kazmaier, VP and GM, data and analytics, and Naveen Rao, CEO of MosaicML, which helps enterprise companies build their own LLMs and just got acquired by Databricks for $1.3 billion. The conference will also showcase emerging companies and their products, with investors like Sequoia’s Laura Reeder and Menlo’s Tim Tully providing feedback. I’m excited about the event because it’s one of the first independent conferences to focus on the enterprise case of generative AI. We look forward to the conversation. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
"
3,161
2,023
"Inflection AI sets off fireworks with $1.3B funding, highlighting surging interest in LLMs (and Nvidia H100s) | VentureBeat"
"https://venturebeat.com/ai/inflection-ai-sets-off-fireworks-with-1-3-billion-funding-highlighting-power-of-llms-and-nvidia-h100s"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Inflection AI sets off fireworks with $1.3B funding, highlighting surging interest in LLMs (and Nvidia H100s) Share on Facebook Share on X Share on LinkedIn image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a pre-July 4th weekend surprise, Inflection AI , the Palo Alto-based startup founded by Mustafa Suleyman, cofounder of DeepMind, and LinkedIn co-founder Reed Hoffman, announced that it has raised $1.3 billion in an eye-popping round that brings its valuation to $4 billion. Forbes reported that Microsoft and Nvidia led the round, along with Hoffman, Microsoft cofounder Bill Gates and former Google CEO Eric Schmidt. Nvidia was the only new investor — which is notable given that Nvidia and its service provider CoreWeave worked with Inflection to develop Inflection’s current H100 cluster, and Inflection worked with Nvidia to help fine-tune models on a recent MLPerf test that set records on current AI model training benchmarks. The Forbes report also said Nvidia and CoreWeave are now helping Inflection install a cluster that will consist of a whopping 22,000 H100s — which Inflection believes to be the largest GPU cluster for AI applications in the world (even ahead of Meta’s 16,000 GPU cluster announced in May). Does surging investor interest signal an AI bubble? Not surprisingly, the surging investor interest in powerful LLMs to create “personal” chatbots has some already chattering about an AI bubble. Nic Carter, a general partner at Castle Island Ventures, said, “AI is making the crypto bubble look like child’s play.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI making the crypto bubble look like child's play the amount of cash that is being splashed in AI – amidst a virtual seize in venture financing, still closed IPO window, sustained high rates – is absolutely blowing my mind https://t.co/bVysGMnz17 Inflection AI raised $225 million when it launched one year ago Inflection, which is only a year old, made eyes water right from the beginning, when it announced it had launched and already raised $225 million with plans to use AI to “generate language to pretty much human-level performance.” And late on a Friday afternoon in March, the Financial Times reported that Suleyman and Hoffman were seeking up to $675 million in funding, even though they had yet to release a product. 
That quickly changed: In May, the company launched Pi, which it said was named for “personal intelligence” and was meant to be “empathetic, useful and safe” — that is, acting more personally and colloquially than OpenAI’s GPT-4, Microsoft’s Bing or Google’s Bard, while not veering into the super-creepy. During a panel last week at the Bloomberg Technology Summit, Hoffman said that the Pi chatbot takes a more personal, emotional approach compared with ChatGPT. “IQ is not the only thing that matters here,” he said. “EQ matters as well.”
Last week, Inflection also announced that it would release a new LLM to power Pi, called Inflection-1, which it said outperforms OpenAI’s GPT-3.5. "
3,162
2,023
"Generative AI startup Typeface raises $100M to customize enterprise content | VentureBeat"
"https://venturebeat.com/ai/generative-ai-startup-typeface-raises-100m-customize-enterprise-content-creation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative AI startup Typeface raises $100M to customize enterprise content creation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Make no mistake about it: There is a lot of hype and a lot of money in play in the generative AI land grab. Today, San Francisco-based startup Typeface announced it has raised $100 million in new funding to help expand its go-to-market efforts as the company builds out generative AI content services for enterprises. The triple-digit fund raise is particularly noteworthy as the startup only exited stealth in February, alongside $65 million in funding. Earlier this month Typeface expanded its customized generative AI approach with a Google Cloud partnership. The company has also added partnerships with Microsoft and Salesforce in recent weeks, further expanding its reach. Former Adobe CTO Abhay Parasnis leads the startup, which aims to empower prominent brands across diverse industries with the capabilities of generative AI. Typeface helps enterprises create content at scale using AI-generated text and images, with machine learning (ML) training that has been customized on an organization’s content. Recognizing the limitations of generalized large language models (LLMs) in meeting specific brands’ requirements, the company seeks to bridge the gap. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “From Typeface’s perspective, the funding news underscores the broader trend of generative AI coming into focus for enterprise customers,” Parasnis, founder and CEO of Typeface, told VentureBeat. “Businesses are starting to really look at this [generative AI], not just as a cool technology with some demos, but rather are thinking about how it is going to actually materially change the businesses and transform workflows.” Generative AI for the enterprise is all about workflow While it’s still early days for gen AI, Parasnis said Typeface is already seeing significant growth in customers signing commercial contracts and in revenue. The company plans to use its new funding to accelerate product innovation around multi-modal generative pipelines and reimagining enterprise workflows in areas like marketing, HR and customer support. “I think generative AI innovation is going to switch from platform innovation to workflow innovation,” Parasnis said. 
As such, instead of organizations thinking about generative AI as a generic tool to generate content, the focus for Typeface is on helping enterprises with specific workflows optimize their business processes. Parasnis said that one of Typeface’s customers, for example, is using the technology to completely reimagine how all its employee communication happens. That includes workflows for generating LinkedIn job postings, employee communications and even payroll reports. “That’s not what you would have thought six months ago about what generative AI could transform, but it is going to transform many enterprise workflows,” he said. Understanding the gen AI maturity model While there is no shortage of excitement around generative AI, Parasnis emphasized that not all enterprises are jumping on the bandwagon. Parasnis is no stranger to the world of enterprise IT and how technology is adopted. He noted that even with the transition to cloud computing , which has been ongoing for a decade, not all enterprise workloads have moved to the cloud. In fact, many workloads continue to remain on-premises. He expects the transition to generative AI to follow a similar pattern, with different stages of adoption for different industries. To help enterprises understand how generative AI can be adopted, Typeface has developed its own gen AI maturity model. The idea behind the model is to take a consultative approach that helps enterprise IT leaders understand AI and, specifically, how generative AI can change workflows. “If you produce a certain amount of content today, using generative AI solutions like Typeface the enterprise can get significantly more content produced while still preserving brand voice and personalization,” Parasnis said. Describing what the company calls the “10x content factory,” he explained that “we define some very specific metrics for customers around investing in generative AI and how they measure it through the lens of more content produced that’s still on brand.” Parasnis commented that for enterprises, adoption of generative AI is not just about technology. Rather he emphasized that new technology adoption is about process, culture and organizational change that have to be combined with the technology. “Sometimes sitting in Silicon Valley, we have a tendency to think these transitions are going to happen much faster than they actually do,” he said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,163
2,023
"External AI R&D labs are becoming a competitive advantage for innovation | VentureBeat"
"https://venturebeat.com/ai/external-ai-rd-labs-are-becoming-a-competitive-advantage-for-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored External AI R&D labs are becoming a competitive advantage for innovation Share on Facebook Share on X Share on LinkedIn Presented by DataRoot Labs Developing an industry-transforming AI solution is an incredible competitive advantage — but pursuing that goal requires real business transformation, along with a significant investment in talent and resources. Major challenges remain: the skills gap between demand for these very specialized roles and the actual pool of available researchers is still vast. And real AI innovation in business comes from a very particular combination of fundamental and applied research, the difference between pure science and goal-oriented, solution-focused work. An AI R&D (research and development) lab is purpose-built for this challenge, merging the best of academic research with industry-centered goals, in a team with truly multidisciplinary AI skills, concentrated on a business partner’s specific AI objectives, says Max Frolov, CEO and co-founder of DataRoot Labs. “AI R&D merges the real-world requirements of the technology sector, with a particular emphasis on pioneering new technology methods, and practical application technology concerns, and offers experimental ways to solve unsolved challenges,” Frolov explains. “AI R&D data scientists are dedicated to working on the big challenges — the ones that are changing the landscape, pushing boundaries, and ultimately developing real IP that adds a competitive advantage and results in a higher valuation of the business.” Almost every company in Silicon Valley has their own AI R&D lab, whether internal or offshore — Frolov explains that companies like Amazon, Grammarly, Ring, Snapchat and DataRobot have already established an R&D presence in Ukraine, while the majority of the enterprise market intensely focused on internal AI R&D experimentation. However, an external AI R&D can extend the frontiers of a company’s technology, no matter what size it is, and futureproof the business in a fraction of the time, resources and dollar cost of building an internal team. Here’s a look at how those external AI R&D centers function, how they can help companies of any size develop innovative products and technologies that can transform markets, and how to establish a successful, lucrative partnership. How R&D centers work An AI R&D lab is an interdisciplinary team of machine learning (M&L) researchers and engineers, MLOps and natural language processing (NLP) professionals. 
A partnership offers a collaborative, agile and affordable way to co-build next-generation products and services with expected, but sometimes unknown or surprising, outcomes. It’s able to apply relevant state-of-the-art AI solutions to new questions. Instead of being limited to working solely on product plans, researchers have the freedom to conduct experiments and delve into uncharted territory. They’re particularly effective when focusing on a niche area, whether that’s a specific AI technology like computer vision or generative AI, or industry- and market-specific questions. An R&D lab can operate as an extension of a company’s core team, or as its primary AI partner, offering companies a collaborative development process for long-term projects that focus on problems demanding very deep AI expertise. A lab will often tap the expertise of local universities and other players in the ecosystem in order to ensure they’re up to date on the latest research, best practices and strategies. “It’s very different from the consulting model, where you tap into high-level expertise and are charged a lot of money per hour for big-picture thoughts,” Frolov says. “With an R&D center, you have a team that works continuously on your projects to crack open unresolved tech challenges and deliver results.” The work typically starts with analyzing the technical task and laying out the market and research landscape around the client. Architecture planning, and then sourcing and analyzing data from the client and elsewhere comes next, in order to train the model and test their hypothesis. The hypothesis aims to answer two questions: Can the technologies we have today solve this particular problem, and if so, how can it be done? From there, the team develops a minimum viable product, and work continuously to improve the initial model. “The important thing about the AI R&D setup is that although each client has their own dedicated team, it doesn’t prevent the overall lab from talking to each other and tapping into disparate teams of experts inside the lab,” Frolov adds. “If anyone is working on a task that requires, say, MLOps expertise, those teams talk to each other, do cross-checks and exchange knowledge. That results in more innovation to push the boundaries of what’s possible today.” The advantages of an external AI R&D facility A key advantage to an external AI R&D facility is solving the resource challenges that the industry faces right now. With the financial investment required and competition for a very limited pool of experts, a lack of in-house resources continues to be a critical obstacle to launching an AI strategy. “And no matter how much you’re willing to pay, there’s still a shortage,” Frolov says. “If you set up your AI R&D center somewhere offshore like Ukraine, not only can you tap into a vast and varied engineering pool outside your usual boundaries, but that engineering pool is typically far more balanced in terms of seniority level and comfortable for R&D team pricing .” It’s also a far more stable source of talent, he adds. In-house engineers frequently leave jobs in which they feel stifled or unfulfilled, or because there’s a better opportunity elsewhere. Poaching, of course, is not uncommon. But in a lab, they’re in a tech-focused, entrepreneurial research-based environment that offers true collaboration. 
And they’re working on big, industry-shaking challenges, where they can actually develop their skills, expand their expertise and wealth of knowledge and advance their career while working on thought-provoking projects. For the clients, that means a long-term R&D partner, with engineers that you know well and trust, who are dedicated to your cause. You can create a long-term relationship committed to advancing and continuously iterating on your AI goals. Tapping offshore talent One of the best reasons to go outside Silicon Valley specifically is the cost. For instance, the salaries for engineers in central and eastern Europe, India and South America are significantly lower compared to North America, so the cost of development is significantly lower. These countries are also investing heavily in their talent pool, not only partnering with universities, private and public research organizations and the like, but supporting the next generation of skilled data scientists. For instance, in a location like Ukraine you’re tapping into a large technical talent pool, but it’s not only about the numbers, it’s about the quality of engineers. The country has a long legacy of scientific education, Frolov says. “We collaborate with the best local universities that offer tech education, including Kyiv Polytechnic Institute, the MIT of Ukraine,” he says. “With Kyiv Polytechnic we’re establishing a master’s program in AI. And our own free online school, DataRoot University , currently has about 6,000 students registered.” DataRoot assists students with the practical application of their technical knowledge as they advance. In teams of three to five people, students work on AI startup project ideas for six months, with an assist from DataRoot, which will support successful projects into completion. “For us it’s a mission — to pay it forward and grow the field,” Frolov says. “And our goal as a company is not only to push the boundaries of technology, but also to put Ukraine on the map of the AI ecosystem out there. But one of the top reasons to hire in Ukraine now is that by working with Ukraine you help a country that’s been invaded by a hostile nation — but is still working on creating tomorrow.” Learn more here h ow AI R&D-as-a -service can boost innovation to seize competitive advantage. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,164
2,023
"Causely launches Causal AI for Kubernetes, raises $8.8M in seed funding | VentureBeat"
"https://venturebeat.com/ai/causely-launches-causal-ai-for-kubernetes-raises-8-8m-in-seed-funding"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Causely launches Causal AI for Kubernetes, raises $8.8M in seed funding Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Causely , an artificial intelligence startup led by CEO and founder Ellen Rubin, announced today the limited early-access launch of its Causal AI platform for enterprise data. The company aims to revolutionize how businesses troubleshoot operational issues and manage application performance using Causal AI technology. The company also announced today that it has raised $8.8 million in seed funding led by 645 Ventures , with participation from founding investor Amity Ventures, and including new investors GlassWing Ventures and Tau Ventures. The funding will enable Causely to build its Causal AI platform for IT and expand its offerings to a wider range of IT problems and scenarios. The financing also brings the company’s total funding to over $11 million since it was founded in 2022. In an exclusive interview with VentureBeat, Rubin said: “We feel like there’s a lot of pain, there’s so much complexity, there are so many … thousand, or [maybe] even millions of interrelationships between the different microservices and all of the different components of the technology stack. And so there’s a lot of room for confusion and painful troubleshooting across different people and teams.” Causal AI for IT operations Causely is entering a crowded market of observability and monitoring tools for cloud-native applications, such as those from DataDog , New Relic , Splunk and others. However, Causely claims to have a unique value proposition and differentiation by focusing on causality, not correlation, and capturing it in software in an automated way. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The early access program for Causely’s initial service is open to a limited number of DevOps and SRE users who are building and supporting apps in Kubernetes. The program will allow them to try Causely’s platform in their own environment and provide feedback and iteration as the company moves towards a minimum viable product and launch. “We’re really the first team that is focused on the causality problems, and going right to the heart of it,” Rubin said. 
“And this idea of causal AI, which is still an emerging part of the AI world, we are uniquely focused on it.” Rubin also said that Causely’s platform is not limited to Kubernetes environments, but can be applied to many other IT problems and scenarios that require automated detection and remediation. “We see opportunities in many areas that could include things as widely distributed as more business continuity challenges, security challenges, edge computing, IoT,” Rubin said. “These are all problem areas that we feel that we could address as well with the same core technology.”
Better cloud application management
The seed round highlights the growing interest in startups applying AI to improve IT operations and the market opportunity for these types of platforms. According to Aaron Holiday, cofounder and managing partner at lead investor 645 Ventures, Causely is creating a new category with Causal AI. “Causely has the potential to bring forth a new generation or the next generation of what data observability will be when you marry it with that type of AI,” Holiday said. With its initial funding and its experienced team, Causely appears well-positioned to gain traction, but it faces risks common to early-stage startups, including finding product–market fit and early customer adoption. The company’s progress over the next year will demonstrate whether its Causal AI approach can solve the complex challenges of managing modern cloud applications and win over enterprise customers. "
3,165
2,023
"Capital One's new chief scientist says 'responsible, thoughtful' generative AI is key | VentureBeat"
"https://venturebeat.com/ai/capital-ones-new-chief-scientist-says-responsible-thoughtful-generative-ai-is-key"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Capital One’s new chief scientist says ‘responsible, thoughtful’ generative AI is key Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When Prem Natarajan, Capital One’s new chief scientist and head of enterprise AI, came on board in May — after five years as a VP at Amazon, leading the Alexa AI organization — it was because he was intrigued. What was in the DNA, he wondered, of one of the largest banks in the U.S. and one with a reputation for a strong technology focus, that could help it succeed in implementing generative AI and large language models (LLMs) in a responsible, thoughtful way? “Capital One was emerging in so many conversations as a big, forward-leaning investor in technology that was one of the first major companies to go all in on the cloud,” Natarajan told VentureBeat in a recent interview. Capital One “offered me a great balance for [the] next phase [of my career] — to contribute using my expertise but to learn about the new challenges that lie at the intersection of [generative AI] and the new set of customer and business problems.” Natarajan, who leads technology strategy, architecture and development for Capital One’s enterprise data, analytics and machine learning initiatives, said the generative AI opportunity for enterprises is substantial — but far more so for organizations that have already committed to a technology transformation. “They’re the ones that will be at the forefront of this,” he said. “People who suddenly wake up and say oh, this is cool, they may not be in the best position to harness it as pervasively as we can.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A shift from AI research to industry Natarajan spent his early career squarely in research, at Raytheon BBN — a company best known for its DARPA -sponsored research (DARPA is an agency of the U.S. Department of Defense, responsible for the development of emerging technologies for use by the military). After that came a long stint at the University of Southern California (where he still has a faculty appointment), serving as a vice dean of engineering and as the executive director of its Information Sciences Institute. But then, he said, he starting noticing a change. For the first time, he explained, the center of gravity was steadily shifting, year over year, from academia to industry. 
“I realized that this is where a lot of the new advances in AI were going to happen, because there was a potent combination of a lot of data — from serving so many customers on search, social media and ecommerce — and compute,” he said. “I thought maybe I should go spend some time in industry — to take a peek into what’s going on.” Capital One is the ‘kind of bank a technology company would build’ After a few years at Amazon, Natarajan said he kept thinking about the verticals that really shape people’s lives — like healthcare, education and, not surprisingly, finance. What he saw in Capital One, he explained, is the “kind of bank a technology company would build.” “When I look at the size of the technology workforce here — 12,000-plus people — and I look at the quality of the people I’m interacting with, this is certainly a technology company, at least in some sense,” he explained. But Capital One, of course, also operates as a bank, with all the regulatory and compliance considerations that are necessary. To tackle that potent combination of technology and risk/compliance, he said, the organization requires a new operating model that scales. (Capital One executives will be speaking at VentureBeat’s upcoming Transform event on July 11 & 12 in San Francisco, which focuses on the power of generative AI. Natarajan is also serving as a member of the AI Innovation Awards committee.) “When I talk about Capital One being ready [for generative AI], it’s not just that they have the artifacts or this expertise — in addition to the size of the investment, there’s also the maturity in how to operate the technology workforce that sets us apart,” he said. Moving to the cloud means totally re-architecting the data environment, he explained. “These are not small tasks, these are multi-year journeys,” he said. “We are so many years into what is a required part of the ML and AI journey.” Implementing AI at Capital One in a ‘responsible, thoughtful’ way Of course, Capital One is a bank, first and foremost, albeit one that is technologically advanced. And Natarajan emphasized that regardless of the sector, “there is a deep imperative to operate all of this in a responsible , thoughtful way — even more so for an organization like us, that is more ready technologically than most.” For the longest time, he said, AI was about testing — such as having the right benchmarks. But now, Capital One has to take an inclusive AI approach right from the design phases of its applications. “So do we have diverse perspectives represented? Are we challenging ourselves to think about the different outcomes?” he asked. Banks, he pointed out, have had to think that through for other parts of their business processes for decades. Now, he believes that Capital One has a “natural strength” to bring in multi-dimensional thinking and examine the different ways issues could manifest, from the design and implementation phases to the testing and ongoing refinement and improvements. “We’re building applications that should serve the maximum number of people in equally performant ways,” he said. “To me, that’s the essence of a responsible portfolio.” Others can put together something that works, he explained, but it is essential to think through the guardrails and safeguards. A ‘learning phase’ for generative AI at Capital One Even a company like Capital One is going through a learning and experimenting phase with generative AI and LLMs, Natarajan cautioned. “Everybody acknowledges, across every industry, that they are learning,” he said. 
“Everybody is exploring.” For Capital One, customer service is certainly an early application contender. “But even there, we have to go through the process to make sure it actually works,” he said. “How does it improve the employee or customer experience?” Natarajan said his top priority at the moment is to continue building a “world-class” AI organization. “We have the framework, we already have a fair number of AI and ML people,” he said. “I want us to be the top destination for the top AI talent that is interested in these problems. I think that’s what will prepare us most for the future.” He added that he is inspired by the company’s 100 million-plus customers. “How can this world-class organization that we build accelerate the delivery of new experiences, differentiated experiences that make everybody’s lives that much easier?” he asked. “Capital One already has a strong data and technology-oriented culture — but everything can be strengthened, especially as we introduce new disciplines.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3166
2023
"Announcing the 5th annual VentureBeat AI Innovation Awards at VB Transform 2023 | VentureBeat"
"https://venturebeat.com/ai/announcing-the-5th-annual-venturebeat-ai-innovation-awards-at-transform-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Announcing the 5th annual VentureBeat AI Innovation Awards at VB Transform 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As generative artificial intelligence (AI) has rapidly emerged as a transformative force across numerous industries, shaping the way we interact, create and innovate, VentureBeat returns on July 11 with its annual flagship event, VB Transform , which will focus this year on getting ahead of the generative AI revolution. Transform, will be a two-day in-person event, July 11 and 12, featuring industry experts and peers coming together to provide comprehensive insights and best practices on the data journey of enterprises. As an added bonus, participants will have numerous opportunities to forge meaningful connections and expand their networks. At the July 12 in-person event at San Francisco’s Marriott Marquis, VentureBeat will recognize and award enterprise, innovative, visionary and inclusivity initiatives through our fifth annual VB AI Innovation Awards. The nominees are drawn from our daily editorial coverage and the expertise, knowledge and experience of our nominating committee members. Prepare to witness the trailblazers and game-changers in the realm of generative AI take center stage as we recognize their outstanding contributions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Introducing the 2023 AI Innovation Awards nominating committee Matei Zaharia, cofounder and CTO at Databricks Matei Zaharia is a cofounder and Chief Technologist at Databricks as well as an Assistant Professor of Computer Science at Stanford University. He started the Apache Spark project during his PhD at U.C. Berkeley in 2009, and has worked broadly on other widely used data and AI software, including MLflow, Delta Lake, Dolly and ColBERT. He works on a wide variety of projects in data management and machine learning at Databricks and Stanford. Matei’s research was recognized through the 2014 ACM Doctoral Dissertation Award, an NSF CAREER Award, and the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE). Tonya Custis, director of AI research, Autodesk Tonya Custis leads the Autodesk AI Lab, a team that does fundamental and applied AI Research, primarily in generative AI and deep learning for CAD geometry. 
She has over 15 years of experience in performing AI research and leading AI research teams and projects at Autodesk, Thomson Reuters, Honeywell and eBay. She has a Ph.D. in Linguistics, an M.S. in Computer Science, an M.A. in Linguistics, and a B.A. in Music. Curtis is a returning member of the nomination committee. “I’m thrilled to be part of VentureBeat’s Transform nomination cohort because my team and I are on the frontlines of developing the technologies that will enable our customers’ workflows for the future, and we need to ensure that we’re in sync and up to date on the latest in this space,” she told VentureBeat. “Above all, we want to ensure we provide the right expertise and solutions that are both trusted and valuable.” Di Mayze, global head of data and AI, WPP Di Mayze has over 20 years of technology and data experience across media, FMCG, finance and retail, offering consulting for companies such as Hearst UK, dunnhumby & Walgreens Boots Alliance. She joined WPP in 2014 as MD of Acceleration (part of Wunderman Thompson) and left in 2017 to become a freelance data strategy consultant for Wavemaker, VML, Geometry, Wunderman Thompson and MediaCom. In January 2020, Di joined the OPEN team and became global head of data and AI for WPP. Mayze is also a returning member of the nomination committee. Prem Natarajan, Chief Scientist, Head of Enterprise AI at Capital One Prem Natarajan, Ph.D., is chief scientist, head of Enterprise AI at Capital One, where he leads the technology strategy, architecture, and development for Capital One’s enterprise data, analytics, and machine learning initiatives, including advancing its AI capabilities, tools, and research efforts. Prem previously led Amazon’s Alexa AI organization and brings more than two decades of experience leading science, technology, and commercialization efforts in natural language processing, speech recognition, computer vision, forecasting, and other machine learning applications. “I’m honored to serve on the nominating committee for this year’s AI Innovation Awards. We are at an incredibly exciting and historically important inflection point in the advancement of AI. When executed responsibly and effectively, AI has the potential to beneficially transform every aspect of our professional and personal lives — from how we develop code and applications, to how we discover and consume information, to how we make it easier for everyone to interact with systems and even with our environments,” Natarajan told VentureBeat. “Today more than ever, it’s important that we identify and elevate the leading thinkers and innovators making meaningful contributions to this field, and I am delighted to support VentureBeat’s efforts to do just that.” Kalyan Veeramachaneni, principal research scientist at MIT College of Computing Kalyan Veeramachaneni is a co-founder of DataCebo , a commercial product based on the Synthetic Data Vault (SDV). Veeramachaneni is also a principal research scientist at the MIT Schwarzman College of Computing. In 2015, he founded MIT’s Data-to-AI (DAI) Lab (part of MIT LIDS ) where he directs a team that builds technologies that enable development, validation, and deployment of AI applications developed using data. Veeramachaneni has previously founded two other AI start-ups: Feature Labs , a data science automation company that enabled enterprises to create machine learning models from their data with automated feature engineering, and PatternEx , an AI cybersecurity company. 
AI Innovation Awards categories Generative AI Innovator of the Year This award will go to the company that has pushed the boundaries of generative AI the furthest in the past year and demonstrated the most innovative use of the technology. The winner will have created an application, platform or service that showcases the vast potential of generative AI in a creative, impactful way. Best Enterprise Implementation of Generative AI This award will highlight the top enterprise company that has implemented generative AI technology in a truly transformative way. Most Promising Generative AI Startup This award will go to the most promising startup that has developed an innovative generative AI application and demonstrated high growth potential, but has raised less than $30 million in funding. Generative AI Visionary This award will go to an individual who has made significant contributions to the field of generative AI through their thought leadership, research, or work building foundational technologies. The winner would be judged based on the novelty and influence of their contributions, as evidenced by publications, patents, or products developed. Generative AI Diversity & Inclusion This award will recognize the company, organization or individual that has done the most to promote diversity and inclusion in the generative AI field. This could include advancing AI ethics, making AI technologies more accessible, providing opportunities and support for underrepresented groups, or using AI in a way that reduces bias and promotes social justice. Generative AI Open Source Contribution This award highlights the person, team or company that has made the most significant contribution to open source tools, datasets, or other resources to help advance generative AI. Counting down to the AI Innovation Award We look forward to sharing a list of final AI Innovation Award nominees at the start of July, as well as editorial and social media coverage of nominees and winners. Awards will be presented at VentureBeat’s in-person event at Transform on July 11 in San Francisco. Stay tuned! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3167
2023
"AI Foundation launches AI.XYZ to give people their own AI assistants | VentureBeat"
"https://venturebeat.com/ai/ai-foundation-launches-ai-xyz-to-give-people-their-own-ai-assistants"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Foundation launches AI.XYZ to give people their own AI assistants Share on Facebook Share on X Share on LinkedIn AI.XYZ dashboard for managing your personal AI. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. AI Foundation , an AI research lab that gave us virtual Deepak Chopra , has launched AI.XYZ , a platform for people to create their own AI assistants. Let’s hope it’s a tangible example of how we’re going to get along fine with AI, rather than be terminated by them. The idea is that we should all feel better if AI assistants offload some of our daily tasks. The foundation calls it the world’s first AI life management platform, designed to promote a healthier work-life balance for busy people, said Lars Buttler, chairman of AI Foundation, a dual commercial and nonprofit entity. Read GamesBeat’s special issue, Gaming communities: Making connections and fighting toxicity. The platform enables users to design their own AI assistants that can safely support them in both personal and professional settings. Each AI is unique to its creator and can assist with tasks such as note-taking, email writing, brainstorming, and offering personalized advice and perspectives. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Unlike generic AI assistants from companies like Amazon, Google, Apple, or ChatGPT, each AI assistant designed on AI.XYZ belongs exclusively to its creator, knows the person’s values and goals, and provides more personalized help. The company sees a significant opportunity for workplaces and enterprises to provide each of their employees with their own AIs. About 83% of workers suffer from work-related stress and seven out of every 10 workers aren’t engaged and working to their full potential, according to Zippia. With AI.XYZ, everyone can have their own personal proofreader, content creator, and brainstormer, saving hours of time and improving work quality. Rob Meadows, CEO of AI Foundation, said, “Our goal is to give everyone back more time and energy to focus on the things they love – the things that make us human.” He added that AI.XYZ would not replace people but would extend everyone with their own AI that proactively works day and night to help them. How it works AI.XYZ is available in public beta and can be accessed on the web with an invitation code. Creators can interact with their AIs through text, voice, and video. 
A free subscription to AI.XYZ allows users to get started creating their own AI, while a premium subscription for $20 per month allows additional capabilities and customization options. The AI Foundation has collaborated with top research institutions like the Technical University of Munich to create “sustainable AI” for everyone. The foundation also pioneered the concept of allowing individuals to create their own AI in 2019 through collaborations with early adopters such as billionaire Richard Branson and Deepak Chopra, among others. It spun out run of its research projects, Reality Defender, which has become a deep fake detection platform trusted by governments. AI.XYZ said it protects user data and privacy while offering personalized benefits to its creators. Each AI is trained on its purpose, tasks to assist with, desired personality traits, preferred expressions, and ideal behaviors. Creators can expand their AI’s knowledge through document sharing, linking to websites like LinkedIn and noting personal memories for future reference. Creators can also decide what their AI will look and sound like by either cloning their own face and voice or choosing options from the AI.XYZ library. Origins AI Foundation was started in 2017. Investors include Twitter cofounder Biz Stone, Founders Fund, OVO, Endeavor, The Brandtech Group, Alpha Edison and Correlation Ventures. The foundation started as a nonprofit and remains both a nonprofit and a commercial entity today. If it finds commercial opportunities in its research, it can spin them out as startups. “Years ago, before AI was cool, we innovated in AI,” said Lars Buttler, chairman of AI Foundation, in an interview with GamesBeat. He noted that his prior company, online game publisher Trion Worlds , created cloud-based massively multiplayer online game worlds that had deeper AI characters. That company didn’t survive, but it helped the AI Foundation team think more about creating smart AI. “The idea of creating very smart AI for (non-player characters) NPCs — your sidekicks or even a version of you — never really left me,” Buttler said. “I teamed up a few years ago with Rob and we decided to just go for that. It was a time when there was no ChatGPT. AI was not really cool yet. Nowadays, Marc Andreessen, Bill Gates — everybody talks about how personal AI is going to be the big thing. Everybody will have their own personal AI. We decided to do this years ago and build really interesting technology.” Along the way, AI Foundation created Reality Defender as a for-profit company that could identify deep fakes for governments, banks and other parties to protect them from fraud. The foundation also got a lot of attention for creating a digital version of Deepak Chopra, the mindfulness and alternative medicine advocate, Meadows said. “But we still believe that personal AI is even more important and even bigger,” Buttler said. “And you know, as we have to deal with all these layoffs and AI encroaching on our jobs, having your own AI sidekick is a really good thing. And so we spent the last few years basically trying to make this affordable for the masses.” Then the large language models (LLMs) like ChatGPT came along and made things much easier. The company had focused on creation of digital characters and natural language conversational interfaces. That helped lead to the announcement of AI.XYZ. 
“We are now we are now capable to literally taking all these books, and any document you have, any content you have, and to literally just drop it in,” Buttler said. “So our technology has evolved a lot. We always use this example of how I learn Kung Fu from The Matrix. I can train my AI, everything about any topic, by literally just dropping all of this into my personal AI brain. And that’s a huge development. And it makes it so much easier to develop personal AI.” Digital Deepak by the billion? I asked if this was like Digital Deepak for everybody. “It’s always been the vision,” Meadows said. “A lot of our early investors were Hollywood talent agencies, etc. So we started with those that could afford to do it early on. But all of our research over the last few years have been laser focused on cost. It used to cost hundreds of thousands of dollars to create an AI. Now, it can cost $20.” Part of the task of the last two years has been getting the models to run on central processing units (CPUs) instead of graphics processing units (GPUs). The large language models coming out helped a lot on the NLP side. On the visual rendering side, as devices get more powerful, AI Foundation was able to do more and more of that off servers. “So the day is finally here where we can do that,” Meadows said. So for $20 a month, people can have their own personal version of an assistant. It won’t look quite as good as many AI characters like Deepak, but the world is moving fast and AI Foundation is integrating new innovations, models and research as they becomes available, Meadows said. Over time, your AI sidekick will get cheaper and smarter. The company isn’t using ChatGPT to ensure privacy and security as you talk with the AI about your life, family and work-life balance. Meadows said the uses include proofreading something for you, figuring out your next task, telling you to look at your emails or Slack messages, and other things on your to-do list. It can also help you with brainstorming, noting appointments coming up, or other tasks that take a lot of your time. It adds value by making you more efficient. And by helping you with personal things in addition to work, it can be more sticky, Buttler said. “It will have all the really the knowledge base of the company you work for,” Buttler said. “So you can think of this also as your expert AI HR person that is always available to you. It can help you with onboarding. You can just ask.” It also won’t be sharing all of the details with your company, Meadows said. So it keeps personal things, like your vacation whereabouts, private as needed. The long run The AI assistant are in a beta testing mode now, and a lot of people are on the waiting list. For the long run, Meadows is excited about industry developments, such as new plug-ins, new foundational models, and other innovations that can be plugged into the AI assistant. Over time, the goal is to have the AI assistant be the manager of all of the other AIs in your life. I asked if people wanted virtual AIs or robots. Five years ago, people wanted to clone themselves. A lot of people still want that. But with AI.XYZ, you can make your AI look like a teddy bear or an avatar from a game. You can give it a different voice. “We have a whole library, upload a picture of what you want it to look like, and we’ll clone that for you,” Meadows said. 
“And then as we move into the real world, as you start to have intelligent vacuum cleaners cleaning your house, do you want the manufacturer of your vacuum to put a brain in your vacuum? Or do you want your AI to be able to interface with your vacuum and know exactly the way you want your house done?” Buttler noted it’s easier to focus on digital AI assistants, rather than robots. He noted that if an edge case goes wrong, a robot can do a lot more damage to you than a digital AI. Meadows noted that more than half of AI Foundation’s code base is written by AI now. The foundation has raised more than $30 million. The company is just starting to scale up beta testing for the AI assistants. As for the competition, it’s the human executive assistant who manages your personal and professional life. But that person won’t cost you just $20 a month. Eventually, the big tech companies will be competitors, as well as other AI firms. AI Foundation has a number of pilots going that will show off the value of the AI assistants, Meadows said. As for AI taking over the world and destroying humans? Buttler said there is too much emotion in the discussion now and AI assistant technology isn’t anywhere near the level of general artificial intelligence where machines would think for themselves. AI Foundation clearly doesn’t believe its products or others being created now are a threat to humanity or will destroy jobs. “We don’t believe the world should stop innovating and hit pause right now,” Meadows said. “We think it’s important that the faster we put this in the hands of everybody, to put AI in the hands of everybody.” He added, “We’re putting a lot of guardrails in there.” If someone talks about hurting themselves to an AI assistant, it will inform them of hotline protections and how to get help. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3168
2023
"Telesign Trust Index a call to action for any enterprise that’s discounting cybersecurity | VentureBeat"
"https://venturebeat.com/security/telesign-trust-index-call-to-action-enterprise-discounting-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Telesign Trust Index a call to action for any enterprise that’s discounting cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The 2023 Telesign Trust Index report shows how the alarming rise of digital fraud is jeopardizing customers’ trust that brands will protect their privacy. Digital trust is increasingly fragile. The report highlights consumers’ concerns about digital fraud, their expectations of companies, and the dire consequences for brands that fail to maintain consumer trust with a strong cybersecurity posture. Preserving trust is the cornerstone of business success in today’s digital economy. The insights the Index delivers should alarm any business that’s discounting or de-emphasizing the value of a solid cybersecurity strategy to protect customer data. With the Trust Index projecting digital payment transactions to reach $2 billion in 2023 and $3.5 billion by 2027, businesses must confront the growing threat of cybercrime, including digital fraud. VentureBeat recently interviewed Telesign CEO Joe Burton about the report’s findings. “Ninety-four percent of consumers said cyber-fraud is the brand’s, the company’s problem,” Burton said. “You not only need to protect them, but you’ve got to bring them on the journey and make them feel right. It’s an opportunity to actually deepen the brand relationship rather than lose it.” He added: “A third of everybody we talked to admitted to being a victim. Well over half of the victims said they lost money, and a third of the victims said they lost more than $1,000.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Closing the trust gap must be a high priority PwC’s 2023 Trust Survey reinforces Burton’s insights. Nine in 10 executives say building and maintaining trust improves the bottom line, and 92% agree that enterprises are responsible for building trust. Yet PwC found a significant gap between how trustworthy executives think their company is and what customers believe. Eighty-four percent of business executives think customers highly trust their company, yet only 27% of customers say the same. Closing that gap must start by prioritizing customer data security and privacy while delivering an intuitive experience in each app and platform. Digital fraud widens the gap PwC found. And the 2023 Telesign Trust Index shows what happens when businesses do nothing to protect their customers. 
Here are the Index’s key highlights, followed by a practical roadmap any business can take to preserve and grow trust: Consumers overwhelmingly believe companies, not individuals, must protect their digital privacy. Ninety-four percent of consumers surveyed agreed that businesses bear responsibility for protecting consumers’ digital privacy. Consumers are seeing the scope and sophistication of digital fraud increasing quickly. Half of consumers polled were apprehensive regarding telephone and other forms of digital fraud, which they perceive as having increased significantly over the last two years. Nearly a quarter of consumers said they would prefer to be audited by the IRS, or to never eat chocolate again, than become a victim of digital fraud. Consumers pay a high price both financially and psychologically for digital fraud. Thirty percent of consumers surveyed reported they had been victims of fraud in the past three years, and 61% reported financial losses, with a third of victims reporting losses of more than $1,000. Even more troubling is the emotional and psychological toll digital fraud inflicts on victims. Four in 10 cite mental health concerns, and 44% characterize the incident as having hurt them. Breaches and data leaks quickly kill consumer loyalty and trust. A hefty 43% of data breach victims personally stopped associating with the brand altogether. Forty-four percent of consumers who were victims of a brand’s breach told friends and family not to associate with the brand. One in three victims shared the incident on social media, amplifying negative brand perceptions. Building a practical roadmap Telesign’s Trust Index suggests a practical roadmap is needed to prevent fraud, protect user data and privacy and maintain consumer trust. Maintaining trust requires company-wide cybersecurity and digital fraud prevention. Organizing strategies by the threats they address is the first step, shown in the following table: Fighting digital fraud: A perfect use case for machine learning Burton told VentureBeat, “Customers are okay with friction if they understand that it’s there to keep them safe. So the idea that you should never be asked for a password, never be asked for two-factor authentication, never be asked for more information — it’s all about how you do it that matters.” Balancing increased cybersecurity with a more intuitive user experience is essential to building consumer trust. Instead of brute-forcing consumers to authenticate themselves with a series of questions many forget after first filling out a given application’s security parameters, Burton believes machine learning can assist in delivering a more intuitive, trust-generating experience. Many approaches to authentication assume a breach first and create roadblocks for consumers who just want to get logged in and transact business, get support or ask questions. Burton said these experiences can be made more intuitive, and brands can get the login experience to be more secure and faster, by using a deep rich machine learning-driven digital identity system, such as Telesign’s. He said he’s seeing machine learning-driven digital identity systems helping to stop fake accounts, reduce onboarding issues, reduce the number of account takeovers and reduce promotion abuse. He credits his company’s use of machine learning as key to performing real-time risk-scoring that can identify malicious activity and alert threat analysts immediately so they can reduce the incidence of attacks in the future. 
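Telesign has not published the internals of its scoring models, so the snippet below is only a minimal illustration of the real-time risk-scoring pattern Burton describes, not Telesign's implementation. The signals (account age, login velocity, IP reputation, new device), the tiny training set and the 0.8 threshold are all invented for demonstration; a production system would use far richer features and data.

```python
# Minimal illustration of real-time risk scoring for sign-up/login events.
# Feature names, training data and thresholds are hypothetical examples,
# NOT Telesign's model or feature set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [account_age_days, logins_last_hour, ip_reputation (0-1), new_device (0/1)]
X_train = np.array([
    [400, 1, 0.95, 0],   # long-lived account, clean IP, known device -> legitimate
    [2,   9, 0.10, 1],   # brand-new account, bad IP, burst of logins -> fraudulent
    [30,  2, 0.80, 0],
    [1,  15, 0.05, 1],
    [250, 3, 0.90, 1],
    [5,   8, 0.20, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = fraudulent

model = GradientBoostingClassifier().fit(X_train, y_train)

def risk_score(event):
    """Return a 0-1 fraud-risk score for a single login or sign-up event."""
    return float(model.predict_proba(np.array([event]))[0, 1])

incoming = [3, 12, 0.15, 1]            # a suspicious-looking event
score = risk_score(incoming)
if score > 0.8:                        # threshold chosen arbitrarily for the example
    print(f"High risk ({score:.2f}): step up to two-factor authentication")
else:
    print(f"Low risk ({score:.2f}): allow with standard checks")
```

The point of the sketch is the workflow Burton outlines: score each event as it arrives, and apply extra friction such as two-factor authentication only when the score warrants it.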
Cybersecurity is a business decision Telesign’s Trust Index quantifies the downside of not respecting trust as a revenue and growth accelerator. Its cautionary findings show why having a practical roadmap to improve cybersecurity is essential to creating revenue now and in the future. More CISOs are quantifying the impact of cybersecurity and zero-trust investments on revenue and profit growth — and seeing their contributions as increasing the possibility of promotion to board-level roles. CISOs who have been tentative about quantifying how their many initiatives and projects impact revenue should take Telesign’s report as a call to action. Reducing e-crime and fraud is a great place to start because it strikes at the center of their ability to deliver consistent, trusted experiences to consumers, the foundations of any business growth. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3169
2023
"FTC fines Amazon $25M for violating children's privacy with Alexa | VentureBeat"
"https://venturebeat.com/security/ftc-fines-amazon-25m-for-violating-childrens-privacy-with-alexa"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FTC fines Amazon $25M for violating children’s privacy with Alexa Share on Facebook Share on X Share on LinkedIn Credit: Jan Antonin Kolar on Unsplash Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Some mildly bad news for the Bezos money machine: Amazon is being slapped with a $25 million fine over its practices for handling children’s data through its Alexa voice-activated assistant and Echo devices. That sounds like a lot to those of us who aren’t mega conglomerates or their leadership, but it’s about two days worth of income for Amazon based on its recent sales performance. The Federal Trade Commission (FTC) and the Department of Justice (DOJ) today announced they have jointly filed a complaint against Amazon, saying the company “prevented parents from exercising their deletion rights under the COPPA Rule, kept sensitive voice and geolocation data for years, and used it for its own purposes, while putting data at risk of harm from unnecessary access.” “Today’s settlement on Amazon Alexa should set off alarms for parents across the country — and is a warning for every AI company sprinting to acquire more and more data.” The child privacy laws Amazon is accused of breaking COPPA, the Children’s Online Privacy Protection Act , refers to a 1998 law passed by the U.S. Congress that states that “operators of commercial websites and online services (including mobile apps and IoT devices, such as smart toys)” that reach children under age 13 must post clear privacy policies, provide direct notice to parents and allow parents to delete the information and prevent further collection. “Amazon’s behavior in retaining children’s voice recordings indefinitely and ignoring parents’ requests for deletion contravenes COPPA and prioritizes profit over privacy,” argued Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “COPPA unequivocally prohibits companies from indefinitely storing children’s data without just cause, especially not for algorithm training purposes.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In a response emailed to VentureBeat, an Amazon spokesperson wrote: “We built Alexa with strong privacy protections and customer controls, designed Amazon Kids to comply with COPPA, and collaborated with the FTC before expanding Amazon Kids to include Alexa. 
As part of the settlement, we agreed to make a small modification to our already strong practices, and will remove child profiles that have been inactive for more than 18 months unless a parent or guardian chooses to keep them.” A warning to the AI industry One of the FTC Commissioners, Alvaro M. Bedoya, also took the opportunity to tweet out a direct cautionary note to the fast-growing AI industry and any companies using machine learning : “Today’s settlement on Amazon Alexa should set off alarms for parents across the country — and is a warning for every AI company sprinting to acquire more and more data.” Machine learning is no excuse to break the law. Today's settlement on Amazon Alexa should set off alarms for parents across the country – and is a warning for every AI company sprinting to acquire more and more data. My full statement with @LinaKhanFTC and @RKSlaughterFTC : pic.twitter.com/clGVikq8gz In response to these charges, a proposed federal court order has been issued, pending approval, mandating that Amazon delete inactive child accounts, certain voice recordings, and geolocation data, and prohibit the company from using such data to train its algorithms. Where Amazon went wrong Despite Amazon’s repeated assurances to users about the ability to delete voice and geolocation data collected by its Alexa voice assistant service, the complaint alleges that the company reneged on these promises by retaining and leveraging the data for improving its Alexa algorithm. Amazon, a leading global retailer, amasses extensive user data, including geolocation and voice recordings. It defends its data handling practices by claiming its Alexa service and Echo devices are designed with user privacy in mind, including parental controls for deleting geolocation data and voice recordings. The complaint reveals that even when parents requested the deletion of their children’s voice recordings, Amazon failed to completely erase the transcripts from its databases, undermining the COPPA rule that requires parental consent for the collection of children’s data, among other measures. The FTC also filed a complaint today against Amazon’s home security subsidiary, Ring , over allegations that it jeopardized its customers’ privacy by allowing any employee or contractor to access private videos, and for failing to establish basic privacy and security measures. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3170
2023
"The best tech job cities of 2023 | VentureBeat"
"https://venturebeat.com/programming-development/the-best-tech-job-cities-of-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs The best tech job cities of 2023 Share on Facebook Share on X Share on LinkedIn Despite the wave of widespread layoffs that hit the tech industry in 2022 and 2023, the technology sector is still flourishing. Tech companies are projected to grow by 15% by 2031, according to data from the U.S. Bureau of Labor Statistics. Tech salaries are also on the rise thanks to a demand for talent across nearly every industry. A recent industry report indicated that salaries increased 2.3% between 2021 and 2022, reaching an average tech salary of $111,348 per year. The report also showed that there were quite a few interesting shifts when looking at salary by location. If you’re looking for a career in tech, it still pays to live in traditional innovation hubs like Silicon Valley, Boston or New York City. Silicon Valley remains the most prominent (and expensive) U.S. tech hub, with a talent pool of nearly 380,000 tech workers. And even despite the growing number of workers abandoning the hub following the trend of remote work, tech layoffs and this year’s Silicon Valley Bank collapse, the area still continues to be a top destination for tech professionals. It’s seen as a center of innovation and an especially attractive area for ambitious entrepreneurs and startup developers. It also boasts the highest average tech salary of $144,962 per year and one of the best climates year-round. Check out global tech organizations on the VentureBeat Job Board like Apple which is looking for a Data Collection/Machine Learning Lead to assist in delivering CVML-based algorithms for ground-breaking features such as augmented reality (AR), Cinematic Video, RoomPlan, FaceID and Animoji. All of these features depend on high-quality data collection, data understanding and data curation. You’ll be responsible for the team’s data strategy and as such will need proficiency in programming languages including Python, C++, or similar and at least one major machine learning framework such as PyTorch, Jax or Tensorflow. The base pay range for this role is between $161,000 and $278,000. Seattle is another hotspot for tech talent in the U.S clocking in an impressive 32% growth rate over the last five years. Home to Amazon and Microsoft, as well as many startups, Seattle offers a thriving tech scene and a relatively lower cost of living than the Bay Area. A total of 160,660 residents work in tech, or 83.78 per 1,000 , which is one of the highest concentrations in the Pacific Northwest. Tech jobs here are expected to grow by 2.5% in the next year and by 8.7% by 2026. 
With an average salary of $129,456 annually, Seattle boasts some of the highest average wages for IT jobs. On the radar are leading firms like Adobe which is currently recruiting a Sr User Experience Designer to join its Adobe Design dynamic team in Seattle. To apply, you’ll need a Bachelor’s Degree in UX, HCI, Design or a related field and a minimum of five-plus years’ of shown experience in product design, building and delivering consumer and/or enterprise products plus direct experience across a range of design approaches and methodologies, from persona development to prototyping and the many steps in between. The U.S. pay range for this position is $111,700 to $206,100 annually. While America’s large, coastal cities still contain the lion’s share of tech talent, mid-sized tech hubs like Salt Lake City, Portland and Denver have put up strong growth numbers in recent years. Known as the ‘Silicon Forest,’ Portland offers high-paying tech jobs in an area with a lower cost of living expenses than Seattle. The average salary for tech workers in Portland is about $127,734 and median tech wages in Portland are 103% higher than median national wages. Portland’s startup scene is thriving, and the city is rife with incubators and coworking spaces. It’s a city proud to champion diversity and inclusivity and of the 30,000 tech workers in the area, 32% identify as female. Opportunities in Portland include jobs with companies like Autodesk which is looking to hire a Senior Full Stack Engineer to join its Digital Experience Platform. As well as the minimum requirements (BA/BS degree plus five years’ industry experience) the preferred candidate will also have advanced knowledge in architecting AWS based systems, JavaScript modules and Object-Oriented Programming and experience working with AEM or other CMS technologies. For U.S.-based roles, the starting base salary is between $109,400 and $188,760. As more and more tech salaries increase, so too has there been a rise in nontraditional tech hubs in the U.S. Emerging as Florida’s tech capital, Tampa holds over 25% of all tech jobs in the state. The city features over 50 software and IT companies, with more likely to take root over the next few years. Through initiatives like the Embarc Collective, Tampa supports tech startups and early-stage tech companies to continue tech growth in the city. Over the last few years, Charlotte has consistently been one of the best tech job cities on the East Coast. North Carolina is full of rich diversity — of all the tech companies in Charlotte, 37% of their CEOs are women. On top of a cost of living that is 1.7% lower than the national average, North Carolina boasts a flat income tax rate regardless of annual earnings, as well as low property and sales taxes. This means your median salary of $118,465 will go further. Other notable growth stories outside of the more established tech hubs include Columbus, where average tech salaries grew 16% year over year, and Phoenix where salaries grew 26%. Finding your dream tech job isn’t just about luck — it’s also about factors totally within your control. When you’re on the job hunt, one of the success-defining factors is as simple (yet difficult) as looking in the right place. If you’re reevaluating your career options or searching for a role that suits your life, visit the VentureBeat Jobs Board today VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
Discover our Briefings. "
3171
2023
"3DFY lets devs create 3D models based on text prompts | VentureBeat"
"https://venturebeat.com/games/3dfy-lets-devs-creator-3d-models-based-on-text-prompts"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture 3DFY lets devs create 3D models based on text prompts Share on Facebook Share on X Share on LinkedIn This sophisticated 3D sword was created from a text prompt. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. 3DFY.ai announced the launch of 3DFY Prompt, a generative AI that lets developers and creators build 3D models based on text prompts. The Tel Aviv, Israel-based company said the tech democratizes professional-quality 3D model creation, enabling anyone to use text prompts to create high-quality models that can be used in gaming, design or virtual environments. Until now, high-quality 3D model creation required professional labor and lots of it. Some recent advances in AI, such as those recently announced by Nvidia, Google and OpenAI, offer effortless 3D model creation. But 3DFY said those solutions severely compromise asset quality, which, in practice, hinders the assets’ usability. To be more than a cool fad, generative AI models need to be tangibly useful, 3DFY said. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! 3DFY’s text-to-3D technology uniquely offers a scalable AI-based solution which generates 3D models comparable to those produced by human 3D modelers. The tradeoff for the professional model quality comes in the form of a category-bound solution, meaning only objects within a range of defined classes can be generated. However, with a continuously expanding range of available categories, this limitation will quickly diminish with time. The result is beautiful 3D models, which are always divided into semantically meaningful parts, constructed with professional-level mesh topology available in multiple levels of detail with addition of physically based rendering textures at any desired resolution. In addition, 3DFY.ai employs strictly ethical AI, as the vast amounts of training data utilized to train the company’s AI are 100% synthetic and generated in-house using advanced computer graphics methods. Eliran Dahan, CEO of 3DFY, said in a statement, “With 3DFY Prompt, 3D creation can now be available to everyone, regardless of budget or experience, in ways that enable self-expression and save time, money and manual work for creators and without having to worry about possible copyright issues. We’re already working on additional object categories and new, unique functionality. Our overarching vision is to make 3D authoring as fun and easy as playing an online game. We want everyone to be empowered to create 3D content.” 3DFY Prompt is live, and can be experienced for free right now. You can play around, creating your own previously unexpressed desires (“an ottoman that looks like an Oreo” anyone?) and save them for later. 
The company has raised $3 million in a pre-seed round to date, and it has seven people, with experience in computer vision, graphics, and AI and machine learning. Dahan started the company in 2019 with cofounder Tal Kenig. In an email to GamesBeat, Dahan said, “[We] go way back and worked together for many years developing medical imaging technologies. As radiology veterans, we are 3D natives, so when we learned how much manual labor is involved in the production of computer graphics 3D assets, we immediately knew that’s where we want to make an impact. Ever since we started out, we are pursuing a clear vision: To make 3D model creation simple and easy for everyone, without compromising on quality. Somewhat similar to what Canva has done for graphic design. What you see now as the first version of 3DFY Prompt is just the tip of the iceberg, as we are working on additional, unique functionality, materializing this vision one step at a time.” Asked if the AI could eliminate jobs of game developers, Dahan said, “We think we will see a trend here, similar to other generative AI fields, where creators would require less technical prowess and more skills around creatively using the new capabilities. But in any matter, it will be humans working with machines.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,172
2,023
"Microsoft’s data and analytics platform Fabric announces unified pricing, pressuring Google and Amazon | VentureBeat"
"https://venturebeat.com/data-infrastructure/microsofts-data-and-analytics-platform-fabric-announces-unified-pricing-pressuring-google-and-amazon"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Microsoft’s data and analytics platform Fabric announces unified pricing, pressuring Google and Amazon Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft today made good on its promise to offer simpler, more efficient pricing for its Microsoft Fabric suite, a new end-to-end platform for analytics and data workloads. The pricing is based on how much total compute and storage a customer uses, VentureBeat has confirmed. It will not require customers to pay for separate buckets of compute and storage for each of Microsoft’s multiple services. The move ups the ante on an array of competitors, including Google and Amazon , who fiercely vie for market share with Microsoft. Those competitors offer similar analytics and data products based on their own clouds, but (Amazon in particular) charge customers multiple times for the various, discreet analytics and data tools used on their clouds. And while Google has created its own fabric offering called Google DataPlex to avoid charging in buckets, Google’s analytics offering isn’t as comprehensive, said Forrester analyst Noel Yuhanna. The pricing sheet, which shows set pricing for compute and storage across Fabric, is expected to be published tomorrow on Microsoft’s blog. An example of the pricing for U.S. west 2, which covers part of the West Coast, was obtained early Wednesday by VentureBeat and is embedded at the bottom of this story. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The new pricing comes after Microsoft announced it was integrating its various data and analytics tools last week into the single Fabric suite. The suite integrates six separate tools, including Azure Data Factory, Azure Synapse Analytics and Power BI, into a unified experience and data architecture. The offering is delivered as a software as a service (SaaS), and is designed for engineers and developers to more easily extract insights from data and present them to business decision-makers. A singular data lake based on an open format Fabric centers around a centralized data lake called Microsoft OneLake that stores a single copy of data in one place. OneLake is built around the open-source Apache Parquet format, allowing for a unified way to store and retrieve data natively across databases. All Fabric workloads are automatically wired into OneLake, just like all Microsoft 365 applications are wired into OneDrive. 
This is where the savings come in. OneLake eliminates the need for developers, analysts and business users across a company to create data silos by provisioning and configuring their own storage accounts for the various tools they use. So, for example, when a user of Microsoft’s business intelligence tool Power BI wants to run analysis on a Microsoft Synapse data warehouse, they no longer send a SQL query to Synapse. Power BI “simply goes to OneLake and pages the data,” according to Arun Ulagaratchagan, Microsoft’s corporate VP of Azure Data, who spoke with VentureBeat Tuesday. “This does two things for customers,” he continued. “First, there’s pretty substantial performance acceleration, because if there’s no SQL query being executed, it’s simply going to run data that shares the same open format, across both Synapse and Power BI.” He added, “The second thing is a big cost reduction for customers. Because you’re not paying for the SQL queries, because there are no SQL queries being done…this idea of a lake-centric and open architecture is so powerful to customers because they don’t have to worry about being locked in. They don’t have to worry about the costs piling up.” Any unused compute capacity on one workload can be utilized by any of the workloads. Fabric adds generative AI and multi-cloud Fabric promises a few other advances. Microsoft will soon add Copilot, a chatbot using generative AI , to every product interface within Fabric. This will allow developers and engineers to use conversational language to ask questions about data or to create data flows, pipelines, code and build machine learning (ML) models. Second, Fabric supports multi-cloud. Through something called “Shortcuts,” OneLake can virtualize data lake storage in Amazon S3 and Google storage (coming soon). Microsoft also announced Data Activator , a no-code way for business analysts to automate actions based on data. For example, a sales manager can be alerted if a particular customer is behind on their payments. In announcing Fabric last week, Microsoft said pricing would come in a separate step (the formal release expected tomorrow). The move saves customers money because Microsoft no longer forces them to pay several buckets of fees for each of the separate tools. For example, being charged once if they use Power BI, again if they use Microsoft’s analytics tools, and again if they use Microsoft’s warehousing tools. Microsoft also said a single security model will be used for OneLake, where all applications enforce a single security management system on the data as they process queries and jobs. Ulagaratchagan said he’d pitched the idea of Fabric to 100 of the Fortune 500 companies over the past few years, and chief data officers told him they were “tired of paying what they consider an integration tax.” Customers seeking simplicity, speed This “integration tax” was levied not only from separate products from Microsoft, but from the hundreds of other vendors selling data and analytics products that enterprise companies need. “This is why we introduced Microsoft fabric: To give customers an end-to-end analytics platform that goes from the database to the business user making decisions, and to give every developer an opportunity to sign up within seconds and get real business value within minutes,” said Ulagaratchagan. 
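To make the "one copy, open format" idea above concrete, here is a minimal sketch in Python. It is not Microsoft's implementation; the file name, columns and the two engines (pandas and DuckDB) are illustrative stand-ins. The point is simply that two different engines can read the same Parquet file directly, with no query hand-off or data copy between them.

import duckdb
import pandas as pd

# Write a single copy of the data in an open columnar format (requires pyarrow).
df = pd.DataFrame({"customer": ["a", "b", "c"], "revenue": [120, 75, 240]})
df.to_parquet("sales.parquet")

# Engine 1: pandas reads the file directly.
print(pd.read_parquet("sales.parquet")["revenue"].sum())

# Engine 2: DuckDB scans the very same file; no SQL is sent to another engine.
print(duckdb.query("SELECT SUM(revenue) AS total FROM 'sales.parquet'").fetchall())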
Amalgam Insights analyst Hyoun Park said the move by Microsoft puts pressure on Amazon and Google, its two largest competitors in the cloud, who have also been charging customers fees for separate buckets of services that they offer. “For Amazon, that could be 200 different buckets, which is part of what makes cloud cost so challenging,” said Park. An integrated package of capability Fabric also puts pressure on some big vendors that only offer one part of the analytics and data stack, Park said. For example, it challenges Snowflake, a data warehouse that uses its own proprietary data formats and requires customers to transform their data to use in other applications. Similarly, it raises questions for business intelligence vendors like Qlik, TIBCO and SAS. “Part of the innovation here is that Microsoft is providing all of these as an integrated package of capability,” said Park. “And as simple as that sounds, it’s not something that the majority of data and analytics vendors are able to provide.” On the other hand, the more ambitious global offering will make it a harder sell for Microsoft, according to Park. By combining products into one, Microsoft’s Fabric isn’t targeting different products to different roles within an organization. Microsoft will now have to sell to the executive suite. Until now, engineers might seek to buy Microsoft’s Data Factory product. Analysts would vouch for Microsoft’s Power BI product. And developers might want Microsoft Synapse. “This is definitely billed as executive sales because nobody below the C-level can okay this,” said Park. However, he pointed out that Microsoft is well-positioned to be able to make that pitch. Forthcoming pricing changes Here’s what Microsoft said it will be posting tomorrow about pricing: Rather than provisioning and managing separate compute for each workload, with Microsoft Fabric, a bill is determined by two variables: the amount of compute provisioned and the amount of storage used. Compute: A shared pool of capacity that powers all capabilities in Microsoft Fabric, from data modeling and data warehousing to business intelligence. Pay-as-you-go (per-second billing with a one-minute minimum). Storage: A single place to store all data. Pay-as-you-go ($ per GB / month). By purchasing Fabric capacity, customers will get a set of capacity units (CUs). Capacity units (CUs) are units of measure that represent a pool of compute power needed. Compute power is required to run queries, jobs, or tasks. The CU consumption is highly correlated to the underlying compute effort needed for the tasks performed during the processing time by the capability. Each capability and the associated queries, jobs, or tasks have a unique consumption rate. "
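As a rough illustration of how a bill built from those two variables composes, here is a minimal sketch. The per-unit prices and the workload figures are hypothetical placeholders, not Microsoft's published rates; only the shape of the calculation (per-second compute with a one-minute minimum, plus per-GB-per-month storage) follows the description above.

def estimate_fabric_style_bill(capacity_units: int,
                               seconds_used: float,
                               price_per_cu_hour: float,
                               storage_gb: float,
                               price_per_gb_month: float) -> float:
    """Combine pay-as-you-go compute (per-second, one-minute minimum) with storage."""
    billable_seconds = max(seconds_used, 60)  # one-minute minimum per the pricing description
    compute = capacity_units * (billable_seconds / 3600.0) * price_per_cu_hour
    storage = storage_gb * price_per_gb_month
    return round(compute + storage, 2)

# Hypothetical example: 4 CUs busy for 10 hours, plus 500 GB stored for the month.
print(estimate_fabric_style_bill(4, 10 * 3600, 0.18, 500, 0.023))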
3,173
2,023
"Databricks accelerates migration to data lakehouse with new technology partner | VentureBeat"
"https://venturebeat.com/data-infrastructure/databricks-accelerates-migration-data-lakehouse-new-technology-partner"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks accelerates migration to data lakehouse with new technology partner Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Databricks, a vendor known for setting up data lakehouses for enterprises, today announced a partnership with database virtualization player Datometry to facilitate easy transitions from legacy data warehouses. The company said the integration will give teams a simple way to migrate data warehouse workloads to Databricks’ lakehouse architecture without worrying about usually pressing aspects like cost or time. The move marks another effort from Databricks to lure more customers to its data platform and better take on competition such as data cloud platform Snowflake. Datometry’s 4x faster migration to data lakehouse Moving data and applications to the cloud from an on-premises setup is no easy task. Companies have to hire system integrators to rewrite the embedded SQL and configuration and make the whole thing work on the new platform. This not only takes a lot of time and capital but is also prone to error. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Founded in 2013 and backed by $34 million in venture capital, San Francisco-based Datometry bridges this gap by providing enterprises with an SaaS platform that lets data and applications written for an on-prem data warehouse run natively in the cloud. The solution continuously intercepts the workloads’ communication with the original database and translates and redirects it to the new cloud platform. It delivers everything as is, including SQL statements as well as features like stored procedures, macros and recursive queries in real time. With this tie-in with Databricks, Datometry has joined the Ali Ghodsi-led company’s technology partner program. The move will see Datometry provide its platform as a validated integration for the Databricks lakehouse, allowing enterprises to quickly connect and pull in their data and applications from legacy on-prem platforms. The company says it can deliver migrations four times faster and at just 20% of the cost of other approaches. “We’re proud to be partnering with Databricks,” Mike Waas, CEO of Datometry, said. 
“This partnership will enable organizations to break free from the vendor lock-in of legacy databases and adopt a lakehouse architecture four times faster than with any other approach.” Partner program drives visibility With its technology partner program, Databricks provides relevant third-party solutions to its customers, allowing them to work seamlessly in their lakehouses. Meanwhile, these third-party solutions get a newer set of customers to target and work with. However, in this case, it is not just Datometry getting new customers. The integration for migrating data and apps will accelerate potential customers’ journeys to Databricks’ lakehouse services. Additionally, when customers can quickly bring workloads into the lakehouse and put them to maximum use, Datometry, which operates on a pay-as-you-go basis, will also see its revenue grow. A similar tactic has been adopted by Databricks’ competitor Snowflake. In January, it signed an agreement with Mobilize Net to acquire SnowConvert, a suite of tools that uses sophisticated automation techniques with built-in analysis and matching to re-create functionally equivalent code for tables, views, stored procedures, macros and BTEQ files in the Snowflake data cloud. "
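For readers unfamiliar with what "translating" warehouse SQL to a new platform involves, the sketch below uses the open-source sqlglot library to transpile a query from one dialect to another. It is only a conceptual illustration: Datometry's virtualization layer intercepts live workload traffic in real time rather than rewriting static SQL, and the table and column names here are invented.

import sqlglot

# A legacy warehouse-style query (Teradata dialect, invented schema).
legacy_sql = """
SELECT customer_id, revenue
FROM sales
QUALIFY ROW_NUMBER() OVER (ORDER BY revenue DESC) <= 10
"""

# Transpile it into Databricks SQL so it could run against the lakehouse.
print(sqlglot.transpile(legacy_sql, read="teradata", write="databricks")[0])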
3,174
2,023
"Pear VC Closes an Oversubscribed $432 Million Seed Fund | VentureBeat"
"https://venturebeat.com/business/pear-vc-closes-an-oversubscribed-432-million-seed-fund"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Pear VC Closes an Oversubscribed $432 Million Seed Fund Share on Facebook Share on X Share on LinkedIn MENLO PARK, Calif.–(BUSINESS WIRE)–May 30, 2023– Today, Pear VC (“Pear”) has announced that it closed its 4th fund, closing at $432 million. According to Prequin Ltd., Q4 2022 VC fundraising was down 65% from the previous year. 1 This fundraising by Pear is one of the largest seed funds raised in recent years by a non-multistage fund, marking a bright spot in an otherwise choppy VC environment. The funding will be used to continue doubling down in one of the brightest areas in venture investing: pre-seed and seed, especially in competitive areas like AI, and programs like PearX, Pear’s early-stage bootcamp for founders, Pear’s Female Founders Circle, a community for technical female entrepreneurs, and Pear Dorm, which supports student builders. “We’re operators and founders specializing in pre-seed and seed – that’s all we do all day every day. We are a generalist firm with specialist investors and operators who act as deep thought-partners and know founders’ industries inside and out,” shared Mar Hershenson, Pear’s Founding Managing Partner. Pear’s team has spent the last decade building the firm from the ground up. It’s a top performing firm, having seeded 3 public companies (DoorDash, Guardant Health, and Senti Bio) and many others valued over $1B (Gusto, Branch, Aurora Solar, Vanta and others). Pear’s Fund I is a top 5th percentile performing fund, in terms of net DPI. 2 Pear was started by Pejman Nozad and Mar Hershenson in 2013. “Pear believed in DoorDash when we were still a tiny startup near Stanford called Palo Alto Delivery. They saw our potential and offered unwavering support to us from our earliest days through the DoorDash IPO,” commented Stanley Tang, Co-founder and CPO of DoorDash. This fundraise comes at a time when markets continue to cool, yet many of Pear’s Limited Partners doubled down on their investments from previous Pear funds, showing that investors are particularly impressed with Pear’s track record to date and prospects for the future. LPs realize the importance of having a dedicated seed fund in their portfolio at this time. “The new fund will bolster our mission to help early-stage founders build legendary companies from the ground up. This fund will allow us to build the infrastructure that helps founders go from idea stage to product-market fit. This includes founder services like talent, go-to-market, engineering, marketing, and more. 
For example, we plan to scale out our talent services team led by Matt Birnbaum, which includes four senior recruiters. For founders, helping them hire for their early teams is part of our core offering, which we know from experience can really shape the entire culture and tone of a startup,” added Pejman Nozad, Pear’s other Founding Managing Partner. Pear runs multiple programs to support early-stage entrepreneurs, including PearX, a 14-week long early-stage bootcamp for founders. Pear has already had a number of breakout PearX companies, including Affinity, Xilis, Federato, Viz.ai, and Cardless, which all got their start through the PearX program. With Fund IV, Pear will be adding a new track: PearX for AI, a program tailored for AI builders with added benefits like hacker space, cloud credits, and 1:1 coaching with AI experts from the Pear team and organizations like Google Cloud. Pear also has a deep history in investing in student builders through the Pear Dorm program. Today, 50% of Pear’s portfolio companies are student-founded, representing $15 billion in total valuation, including Branch, Aurora Solar, Affinity, Viz.ai, and Bioage. With Fund IV, Pear will be expanding its Dorm team and introducing new programs. The fund also arrives at a moment where Pear is stepping up to support more female entrepreneurs. Pear runs a Female Founders Circle program, which is focused on helping train technical female founders moving into entrepreneurship. To date, 41% of Pear’s portfolio companies have a female founder and the Pear team is 50% female itself. With Fund IV, Pear looks to increase the size and scale of the Female Founders Circle program. About Pear VC: Pear VC is a pre-seed and seed stage VC firm that partners with entrepreneurs from their earliest days to build category-defining companies. We’ve invested in top-tier companies including DoorDash, Gusto, Branch, Guardant Health, Aurora Solar, Vanta, Affinity and so many more. We’re company builders, having founded 10+ companies ourselves: we help companies find product-market fit, recruit their first hires, and overcome other critical business challenges. __________________ 1 https://www.wsj.com/articles/venture-fundraising-hits-nine-year-low-c2b4774 2 Cambridge Associates benchmarks View source version on businesswire.com: https://www.businesswire.com/news/home/20230530005616/en/ Jill Puente – [email protected] "
3,175
2,023
"Particular Audience Unveils Revolutionary 'Adaptive Transformer Search' Solving the $300 Billion eCommerce Search Problem | VentureBeat"
"https://venturebeat.com/business/particular-audience-unveils-revolutionary-adaptive-transformer-search-solving-the-300-billion-ecommerce-search-problem"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Particular Audience Unveils Revolutionary ‘Adaptive Transformer Search’ Solving the $300 Billion eCommerce Search Problem Share on Facebook Share on X Share on LinkedIn Brand new AI-powered site search understands shopper intent, capable of reducing zero-search results by up to 70%. SYDNEY–(BUSINESS WIRE)–May 31, 2023– Particular Audience, a pioneer in advanced artificial intelligence technologies for ecommerce, today announced the launch of its revolutionary Adaptive Transformer Search (ATS). This AI-powered search technology promises to solve the underlying problems plaguing ecommerce search as reported by 94% of consumers, representing a significant leap forward in search efficiency and customer experience. Discovery on the Internet has come to rely on search and recommendation technologies for fast and intuitive information retrieval. While legacy keyword search has worked well enough, it still suffers from inherent flaws associated with exact word matching and a tangle of rules that need constant management. These issues are exacerbated by messy and/or incorrect metadata in a retail website’s product feed. The cost of this problem is estimated by Google to be worth $300bn per annum in the USA alone. 76% of customers report they abandon a retailer after failing to find what they are searching for, with 48% then purchasing the item elsewhere. More than half report they typically abandon their entire shopping cart after failing to find a single item on a website. Eighty-five percent of consumers say they view a brand differently after experiencing search difficulties and 77% avoid websites where they’ve had poor search experiences. Customers are not alone in acknowledging the extensive problem of bad site search; retailers agree, 90% of US based website managers surveyed are concerned about the cost of search abandonment to their business, while more than half have no clear plan for improvement. Unlike conventional ecommerce search engines that rely on exact keyword matching and continuous manual updates, ATS is designed to understand the meaning and context of words in a query. This innovative approach eliminates the need for extensive manual configuration, reducing overhead for website owners and facilitating an intuitive search experience for customers. Longtail search queries can make up to 80% of site search and this is one of the key opportunities that ATS is best placed to solve. “At Particular Audience, we’ve always focused on addressing the root causes of discovery abandonment with applied artificial intelligence,” said CEO, James Taylor. 
“With ATS, we’ve harnessed the power of Large Language Models, paired with our own vertical tuning to generate the most relevant search results right out of the box. No matter how niche or conversational a search is.” Adaptive Transformer Search is built using transformer models, converting sequential long form text (retailer catalogue and website data) into vectors in high-dimensional space. The conversion of a sequence of words into a vector is known as sentence embeddings, a concept popularised by large language models such as Google’s BERT and OpenAI’s GPT. This means ATS is capable of understanding the meaning in a sentence and can, for example, understand the difference between ‘getting a laptop online using a credit card’, and ‘getting a credit card online using a laptop’. Adaptive Transformer Search leverages PA’s proprietary Vertical Tuned Models (VTMs), creating sentence embeddings that adapt from localized reinforcement learning. This continual learning process enables ATS to improve its precision and accuracy specific to individual retailer websites. “Automating the tuning of search results through ‘query-click-pair’ reinforcement learning has been a game changer for our ATS product. What this means in simple terms is that our models continually learn from user search queries, understanding context to optimise results for future queries. Adapting search relevance in real-time to evolving consumer context has never been possible on retailer websites before now,” said Particular Audience’s Head of Product, Patrick DiLoreto. The positive impact of ATS on an ecommerce website is profound. It increases search revenue by more than 20% when compared with legacy keyword search technology, it reduces the instance of zero-search-results by as much as 70%, and enhances customer engagement through better ranking of results. This breakthrough technology is purpose-built to facilitate intuitive and efficient search experiences for every customer, ensuring that they find what they are looking for every time they shop. “Large Language Models are generating a lot of buzz, and we are proud to be at the forefront of AI in ecommerce with the introduction of Adaptive Transformer Search,” added CEO, James Taylor. “We believe this revolutionary technology will not only transform the way consumers shop online but also set a new standard for search efficiency and customer experience in the ecommerce industry.” For more information on Adaptive Transformer Search and how it’s changing the face of ecommerce, visit Particular Audience’s website at https://particularaudience.com/search/. You can also read the ATS whitepaper at https://whitepaper.particularaudience.com/Adaptive%20Transformer%20Search.pdf. About Particular Audience Particular Audience is a leading AI-as-a-Service technology company specializing in ecommerce solutions. Leveraging the power of advanced artificial intelligence, PA is committed to reinventing the online shopping experience and addressing the underlying problems that impede consumer search and discovery. Particular Audience is paving the way for a more intuitive and efficient online shopping experience both on retail websites and via its consumer application https://similarinc.com/. 
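The general mechanism behind sentence embeddings and vector similarity described above can be illustrated with off-the-shelf tools. The sketch below uses the open-source sentence-transformers library and a generic public model as a stand-in; Particular Audience's proprietary Vertical Tuned Models and reinforcement-learning tuning are not public and are not shown here. The example sentences echo the laptop/credit-card pair from the release.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic stand-in model

query = "getting a laptop online using a credit card"
candidates = [
    "getting a credit card online using a laptop",
    "buy a notebook computer online and pay with your card",
]

# Encode the query and candidate texts into dense vectors in the same space.
query_vec = model.encode(query, convert_to_tensor=True)
cand_vecs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity ranks candidates by meaning, not by exact keyword overlap.
for text, score in zip(candidates, util.cos_sim(query_vec, cand_vecs)[0]):
    print(f"{float(score):.3f}  {text}")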
For more information, visit https://particularaudience.com View source version on businesswire.com: https://www.businesswire.com/news/home/20230530005708/en/ James Taylor Founder & CEO [email protected] +61 (0) 451 006 413 "
3,176
2,023
"How Call of Duty dev made the shift to fantasy with Immortals of Aveum | VentureBeat"
"https://venturebeat.com/business/how-call-of-duty-dev-made-the-shift-to-fantasy-with-immortals-of-aveum"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Call of Duty dev made the shift to fantasy with Immortals of Aveum Share on Facebook Share on X Share on LinkedIn Jak is the hero of Immortals of Aveum. For a Call of Duty game, Immortals of Aveum flew in under my radar. I jest, of course, as the title is the debut fantasy game from Ascendant Studios. It so happens that Bret Robbins , CEO and game director of Ascendant, dreamed of taking Call of Duty gameplay into new territory, like a fantasy game with magic. While that sounds a bit absurd, I really enjoyed the gameplay in a recent preview at Electronic Arts, which is publishing the title on July 20. As you cast spells, it reminds you of wielding a shotgun or assault rifle in a Call of Duty game. The single-player-only game is coming to the PlayStation 5, Xbox Series X/S and the PC. And while there is a tutorial training level, the gameplay will be quite familiar to first-person shooter players. This “magic FPS” lets you cast magic spells from your hand that feel like you’re taking shots with modern weapons. It’s a bit like BioShock in its interface and the gunplay, I means spellplay, was pretty good with a solid vibration every time I fired. I enjoyed the type and variety of enemies, and they weren’t so easy to dispatch. The game looks beautiful as an Unreal Engine 5 title. After I played, I spoke with Robbins, who was the former senior creative director at Sledgehammer Games, the maker of titles such as Call of Duty: WWII. He started Ascendant Studios, which has more than 200 people, five years ago. Here’s an edited transcript of our interview. GamesBeat: Can you tell me where the inspiration for this came from? How long have you been working on it? Bret Robbins: I started this company five years ago, but I was thinking about the game probably seven or eight years ago. I think the initial inspiration came from my time on Call of Duty. Learning how to make a big blockbuster shooter, what that meant, and then looking around and not seeing anything in the fantasy genre like that. That seemed like a huge missed opportunity. I was surprised that no one was making anything like that, and so I decided I wanted to make it. GamesBeat: How big an effort has it been, then? Robbins: It started as a company of one. I started hiring. Probably for the first two years we were around 30 to 40 people or so. We started hiring up at that point, and today Ascendant is more than 200 people. GamesBeat: How did you match up with Electronic Arts? Robbins: About two, two and a half years into production we had finished a combat prototype. In my view, it proved out what the core of the game was going to be. 
It was a lot of fun to play. It showed what it meant to do magic as your guns and all of that. It had a bunch of interesting abilities and a bunch of enemies. That got us on EA’s radar. I knew we would need a marketing partner pretty soon. We were going into full production. We talked to them and they were excited about the game. We partnered up in the beginning of 2022. It’s been a great partnership. We’re very happy with them. Ascendant is funding the game development entirely and we own the IP, but EA Originals has been a great publishing and marketing partner for us. GamesBeat: So the colors of magic are blue, red, green–are there other colors too? Robbins: Just blue, red, and green. GamesBeat: Does that sort of correspond to sniper, shotgun, assault rifle? Robbins: For the primary spells, yeah. But then there are the controls and the furies. There are quite a few other options there. GamesBeat: But it parallels the gun combat in Call of Duty? Robbins: I wanted the game to have some familiarity and accessibility for players who like shooters. I didn’t want to completely reinvent the wheel. I wanted to have a good foundation where we could build all of our unique abilities and combat mechanics. We were constantly walking that line between keeping it familiar, but also bringing in something new. That was challenging, but it ended up working. GamesBeat: And this is running at 60 frames per second? It seemed very fast. Robbins: Yeah, yeah. It’s 60. We’re doing 60 frames on all platforms. GamesBeat: It seems like a big undertaking for a startup. Were there some lessons you learned that helped you tackle such a big game? Robbins: A couple of things that I think helped us–one, we had a very clear vision from the beginning. I’d spent a few months before I hired anyone on just writing out a game design document, a story treatment, game pillars, things that would be important to the game. If you read those documents today, they’re very much what we ended up making. Projects can get into a lot of trouble, a lot of rough water, when the vision shifts and changes. Just that consistency of vision helped us a lot. It made everything a lot clearer for everyone. And then honestly the fact that we were a smaller team made us agile. It allowed us to move fast and fail fast. GamesBeat: Were there other inspirations besides wanting to do a fantasy game? Robbins: Because this was really an opportunity for me to make my own game, I took a lot of different inspirations from lots of different sources and put them in a blender. I had a lot of things come out that were interesting and different. There weren’t a lot of singular inspiration points. Call of Duty, the fast-paced combat of Call of Duty, that was certainly in my mind. Games like BioShock that create such a great world. GamesBeat: The BioShock influence was familiar. The colors make it a lot easier to follow. Robbins: Yeah, that was a very early idea, to do the red, green, and blue and have them each have their own identity. When you’re working on creating something that’s based on magic, you can go in a million different directions. Pretty quickly I realized we needed to put our own rules in place, or else this was going to be a mush of nonsense. The red, green, and blue came in for that reason. That helped focus everything. GamesBeat: When players get more used to it, what kinds of things are they doing? I didn’t pick up on using the furies very much yet. Robbins: The furies are hugely important. They’re sort of our showstopper spells. 
As you progress through the game you get more familiar with how to combo spells together, which spells work great in tandem together. I’m going to lash you in and use red blast up close. I’m going to use vortex to pull a bunch of enemies together and then use my shatter spell to blow them all up at once. Opportunities like that, you start to become familiar with them. There’s definitely skill involved in spell selection. You can also find spells that you really like and invest in them through the gear system or through talents and make them more powerful. GamesBeat: I did notice, with the one boss I found so far, that headshots were very important. Robbins: Yeah, critical shots like that are part of the personality of the blue magic. The more sharpshooter-accurate blue blasts–there are talents around critical hits and doing more damage with critical hits. Not every character’s critical spot is a head, though. There are other creatures where you’ll need to hit them somewhere else. GamesBeat: How big would you say the game is? Robbins: If you’re just trying to power through, enjoy the story, and not really engaging in the side content, it’s probably a solid 20 hours or so. Maybe a little more. If you want to go off the beaten path and explore, we have a lot there. It’s probably 30 or 40 hours. There’s a lot of side challenges and things you can find, especially toward the endgame. Hidden bosses and things like that. GamesBeat: Is there multiplayer? Robbins: There’s no multiplayer, no. Single-player only. We’ve certainly talked a lot about it and done a bit of prototyping. But for this first game–single-player is something I’m very passionate about. It’s something I’ve done for my entire career. I knew I wanted a big campaign to introduce people to this world and this franchise. I decided to focus on that. GamesBeat: Does EA seem like a strange partner in that way? Robbins: Not so much. They just did the Dead Space remake. Jedi Survivor just came out. That’s a great single-player game. I think they’re embracing it quite a bit, and I’m glad they are. Every year you look at the top 10 best-selling triple-A games, there’s always single-player in the list. Usually quite a few. I think it’s here to stay. GamesBeat: The lore does seem interesting. What do you think you brought in that way that’s special, the background of this world? Robbins: My lead writer, myself, and actually we have a lore writer as well–we all spent a lot of time on the world, the backstory, the characters. It was important that everything felt believable and consistent, that you were really in a world. I like world-building myself. I find it a lot of fun. We spent a lot of time on that. GamesBeat: The different kinds of bosses, how would you group them or describe them? Robbins: Well, there are several bosses in the game, and they’re each sort of different. Sometimes in an early level a boss will become a recurring enemy later in the game. But the bosses were important to mix up the gameplay and have impressive moments in the experience. GamesBeat: With the way game development technology has evolved, was there anything particular that helped you in that area? Did AI arrive in time to help you? Robbins: Not so much? Mostly it was just working in Unreal 5. We’re on the cutting edge a bit there. It has some powerful tools and features. We got up to speed on that and that helped us quite a bit, making the game look and play as well as it does. GamesBeat: Did you start with Unreal 4 and switch over? 
Robbins: Yeah, we started on 4 and then migrated over to 5. We were early adopters. The Nanite feature is pretty powerful. It’s on-the-fly LOD, which allows you to get higher fidelity in your geo. And then Lumen, the dynamic lighting system, is really powerful. That helps workflow. It helps you be able to iterate faster and it makes the world look beautiful. Those two features in particular were very powerful. "
3,177
2,023
"DealHub Supercharges its CPQ Platform with Subscription Management Built Natively on HubSpot Sales Hub CRM | VentureBeat"
"https://venturebeat.com/business/dealhub-supercharges-its-cpq-platform-with-subscription-management-built-natively-on-hubspot-sales-hub-crm"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release DealHub Supercharges its CPQ Platform with Subscription Management Built Natively on HubSpot Sales Hub CRM Share on Facebook Share on X Share on LinkedIn AUSTIN, Texas–(BUSINESS WIRE)–May 30, 2023– DealHub.io, the leading provider of NextGen CPQ, announces today the release of DealHub Subscription Management for HubSpot Sales Hub CRM. The new platform addition runs natively within Sales Hub and expands the ability of DealHub CPQ to support all recurring revenue streams. Managing recurring revenue models and dynamic subscription plans has become increasingly complex. DealHub Subscription Management simplifies the process by automating and facilitating critical subscription actions, including renewals, upgrades, downgrades, co-termed expansions, cancellations and other contract amendments. With its centralized view of the entire subscription lifecycle and residing directly within Sales Hub, DealHub Subscription Management provides organizations of all sizes the power to streamline quoting and contracting processes, optimize revenue potential, and effortlessly support flow-through to their billing, revenue recognition and invoicing systems. “We’ve experienced a massive growth in DealHub CPQ customers running on Sales Hub as HubSpot’s CRM continues to rapidly gain market adoption,” said Eyal Elbahary, DealHub.io CEO. “The availability of DealHub Subscription Management will provide even greater value for Sales Hub customers needing to manage and support recurring revenue streams.” DealHub Subscription Management is a key component of DealHub’s comprehensive revenue platform , which also includes CPQ (Configure, Price, Quote), CLM (Contract Lifecycle Management), Digital DealRooms, Document Generation and e-Signature. Trusted by leading organizations worldwide across multiple industries including SaaS, hardware, manufacturing, and services, DealHub empowers businesses to drive revenue growth and improve revenue efficiency. To learn more about how DealHub Subscription Management can transform your subscription management processes on HubSpot CRM, visit here or schedule a demo with our team of experts. About DealHub DealHub offers the most complete and connected quote-to-revenue solutions for sales organizations. Our low-code platform empowers visionary leaders to connect their teams and processes, execute deals faster, and create accurate and predictable pipelines. With a unified CPQ, CLM , Billing and Subscription Management stack powered by a guided selling playbook , teams can generate spot-on quotes, accelerate contract negotiations, and sign off bigger deals. 
Using a DealRoom, they can centralize buyer/seller communications to deliver the most innovative buyer experience and drive deals to success. For more information, visit dealhub.io or follow DealHub on LinkedIn. View source version on businesswire.com: https://www.businesswire.com/news/home/20230530005333/en/ Gideon Thomas [email protected] "
3,178
2,023
"Angeles Equity Partners Appoints Seasoned Automation Executive David Carr as CEO of RōBEX | VentureBeat"
"https://venturebeat.com/business/angeles-equity-partners-appoints-seasoned-automation-executive-david-carr-as-ceo-of-robex"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Angeles Equity Partners Appoints Seasoned Automation Executive David Carr as CEO of RōBEX Share on Facebook Share on X Share on LinkedIn LOS ANGELES–(BUSINESS WIRE)–May 31, 2023– Angeles Equity Partners, LLC (“Angeles”), a private investment firm focused on value creation through operational transformation, announced the appointment of David Carr as CEO of RōBEX LLC (“RōBEX”), a precision integrator of industrial robots. With the addition of David to the RōBEX team, Angeles is taking the next step in its strategic plan to accelerate growth in the industrial automation and robotics sector. “David possesses an exceptional combination of engineering credentials, automation experience, and senior leadership responsibilities within reputable organizations that have prepared him for success in his new role. We are confident that David will position the business strategically and operationally to achieve its full potential,” said Matt Hively, Operating Partner at Angeles Operations Group. Carr has a track record of leading teams to deliver extraordinary results in the robotics and automation industries. He began his career as a systems engineer at ORMEC Systems, an automation controls provider, where he spent 17 years in a variety of leadership positions. Carr subsequently spent more than a decade at Danaher, where he possessed a number of key roles, including general manager of the Sonix and Setra Systems business units. Most recently, he was with AMETEK, leading the Haydon Kirk Pittman brands for more than six years. “This is a great time to be in automation. I am energized to partner with Angeles and the RōBEX team to realize the outstanding potential of this business. Our goal is to become the trusted partner for our customers, providing the design, installation and support they need for successful robotic automation deployments,” said Carr. About Angeles Equity Partners, LLC Angeles Equity Partners, LLC is a specialist lower middle-market private equity investment firm with a consistent approach to transforming underperforming industrial businesses. In partnership with Angeles Operations Group, LLC, the Angeles skill set drives the firm’s investment philosophy and, in its view, can help businesses reach their full potential. Learn more online at www.angelesequity.com. About RōBEX LLC RōBEX LLC began as a robotic material handling integrator founded in 2015 in Perrysburg, Ohio. 
Along with the acquisition of +Vantage Corporation and Mid-State Engineering, RōBEX has distinguished itself as a market leader in diverse automation, inspection, assembly and systems integration solutions within several key industries such as automotive, aerospace, food and beverage, glass and plastic packaging, and pick, pack and palletizing. The company leverages its expertise to bring value to customers through robotic solutions that improve productivity and safety. For more information, go to https://robex.us/. If you would like more information, please email [email protected]. This is not an offer or solicitation to sell securities. View source version on businesswire.com: https://www.businesswire.com/news/home/20230531005397/en/ Michelle Barry Chameleon Collective for Angeles +1 (603) 809-2748 "
3,179
2,023
"Want to easily deploy an open-source LLM? Anyscale's Aviary project takes flight | VentureBeat"
"https://venturebeat.com/ai/want-to-deploy-an-open-source-llm-easily-anyscales-aviary-project-takes-flight"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Want to easily deploy an open-source LLM? Anyscale’s Aviary project takes flight Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Anyscale , the lead commercial vendor behind the open-source Ray machine learning (ML) scaling technology, is launching the new open-source Aviary project today to help simplify open-source large language model (LLM) deployment. There are a growing number of open source LLMs, including Dolly , LLaMA , Carper AI and Amazon’s LightGPT , alongside dozens of others freely available on Hugging Face. But, simply having an LLM isn’t enough to make it useful for an organization — the model still needs to actually be deployed on infrastructure to enable inference and real world usage. Getting an open-source LLM model deployed onto infrastructure has often been a bespoke process of trial and error as developers figure out the right compute resources and configuration parameters. It’s also not easy for developers to simply compare one model with another. These are some of the challenges Anyscale is looking to help solve with Aviary. “Every week, new open-source models are released that people are trying out that are pushing the state of the art,” Anyscale CEO Robert Nishihara told VentureBeat. “Where there hasn’t been as much progress and what has lagged behind in our view, is the open-source infrastructure for actually running those models.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How Aviary works to ease open source LLM deployments The Aviary project builds on top of the open-source Ray project with a set of optimizations and configurations to ease LLM deployment of open-source models. Ray is already widely used by large organizations for model training and is the technology that OpenAI uses for its models including GPT-3 and GPT-4. The goal with Aviary is to automatically enable users of open source LLMs to deploy quickly with the right optimizations in place. Nishihara explained that there are many different things that need to be configured on the infrastructure side, including model parallel inference across multiple GPUs, sharding and performance optimizations. The goal with Aviary is to have pre-configured defaults for essentially any open-source LLM on Hugging Face. Users don’t have to go through a time consuming process of figuring out infrastructure configuration on their own; Aviary handles all that for them. 
Aviary also aims to help solve the challenge of model selection. With the growing number of models, it’s not easy for anyone to know the best model for a specific use case. Nishihara said that by making it easier to deploy open-source LLMs, Aviary is also making it easier for organizations to compare different LLMs. The comparisons enabled via Aviary include accuracy, latency and cost. As new LLMs emerge, Aviary will enable them quickly Aviary has been in private development at Anyscale for the last three months. Initially it took a bit of time to get the right configuration for any one open-source LLM, but what has become clear is that there are common patterns across all LLMs for deployment. Nishihara said that when LightGPT became available, Aviary was able to add support for it in less than five minutes. He explained that there are a few different standard architectures that all open-source LLMs conform to in terms of how they handle model parallelism and other critical aspects of deployment. “We don’t have to handle hundreds of special cases,” said Nishihara. “In fact, you just have to handle each of the standard model architectures and then all of the different LLMs fall into one of those categories.” Overall, Nishihara expects that the number of open-source models is only going to grow and, as a result, the problem of selecting models will only become harder for organizations. “Our hope with Aviary is, with it being open source, anyone from the community who wants to will be able to just easily add new models,” he said. “That’ll make it easy for anyone using Aviary to just deploy those models without having to really do any extra work.” "
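To give a sense of the per-model boilerplate that a project like Aviary aims to pre-configure, here is a minimal sketch of serving a single open-source Hugging Face model with Ray Serve. This is not Aviary's own API; the model name, replica count and GPU setting are illustrative assumptions, and a real deployment would also need the sharding and parallelism tuning the article describes.

from ray import serve
from transformers import pipeline

@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class OpenLLM:
    def __init__(self, model_id: str = "databricks/dolly-v2-3b"):
        # Load a text-generation pipeline for the chosen open-source model onto the GPU.
        self.generator = pipeline("text-generation", model=model_id, device=0)

    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        out = self.generator(prompt, max_new_tokens=128)
        return {"text": out[0]["generated_text"]}

# Bind and launch the deployment on the local Ray cluster, exposed over HTTP.
serve.run(OpenLLM.bind())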
3,180
2,023
"Vectara aims to ground generative AI conversational search without hallucinations | VentureBeat"
"https://venturebeat.com/ai/vectara-aims-to-ground-generative-ai-conversational-search-without-hallucinations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Vectara aims to ground generative AI conversational search without hallucinations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Vectara is continuing to grow as an AI powered conversational search platform with new capabilities announced today that aim to improve generative AI for business data. The Santa Clara, Calif.- based startup emerged from stealth in Oct. 2022, led by the former CTO and founder of big data vendor Cloudera. Vectara originally branded its platform as a neural search-as-a-service technology. This approach combines AI-based large language models (LLMs), natural language processing (NLP), data integration pipelines and vector techniques to create a neural network that can be optimized for search. Now, the company is expanding its capabilities with generative AI that provide summarization of results for a more conversational AI experience. The company is also adding what it calls “grounded generation” capabilities in a bid to help reduce the risk of AI hallucinations and improve overall search accuracy. “It’s all about moving from legacy, which is a search engine that gives you a list of results, and what ChatGPT opened our eyes to, which is that all consumers want is the answer,” Vectara CEO and cofounder Amr Awadallah told VentureBeat. “We just want the answer, don’t give me a list of results and I have to go read to figure out what I’m looking for — just give me the answer itself.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Alongside the new features, Vectara announced that it has closed a seed round of $28.5 million. The closed seed round includes $20 million that Vectara had previously announced in Oct. 2022. The funding was led by Race Capital, with new strategic board of advisors including Databricks CTO Matei Zaharia. Generative AI-powered search is increasingly competitive When Vectara first emerged in 2022, there were few competitors in the generative AI search space — but that has changed dramatically in just a short time in 2023. In recent months, Google has entered the space with a preview of its Generated Search Experience that was announced at the Google I/O conference on May 15. Microsoft’s Bing integrated with OpenAI to provide a generative AI experience as well. Elasticsearch has also been expanded to integrate generative AI with an update announced on May 23. 
Awadallah is well aware of the increasingly competitive landscape and is confident in his firm's differentiation. A core element of the Vectara platform is what is known as a "retrieval engine," the technology that matches the right semantic concepts with entries in a vector database. The original basis for the Vectara retrieval engine comes from research that Awadallah's co-founder Amin Ahmad did in 2019 while at Google, described in the 2019 paper "Multilingual Universal Sentence Encoder for Semantic Retrieval." Awadallah explained that Vectara has improved on that original design, providing a highly accurate retrieval system.

What grounded generation is all about

Prior to the new update, the search platform provided users with a list of results that benefited from both keyword matching and semantic AI capabilities. The list of results, however, was still just a list that a user had to look through to get an answer. With the platform update, users can now get a generative AI result that summarizes the most relevant sources to provide an answer to a query.

Generative AI results, such as those from ChatGPT, carry a risk of AI hallucination, where an inaccurate result is presented. Awadallah explained that hallucinations occur in LLMs because the model has compressed a vast amount of information and can generate an answer that is not true. To help solve that issue, Vectara has integrated a grounded generation approach, which other vendors sometimes refer to as retrieval-augmented generation. The basic idea is that generated results are associated with a source citation to improve accuracy and to direct users to more information from the original source.

Zero-shot ML

The Vectara platform also uses what is known as a "zero-shot" machine learning (ML) approach that enables the model to continuously learn from new data, without the need for time-consuming fine-tuning and retraining.

"As data is coming in, within a few seconds that data is already part of the mix and it will be reflected in the answers that are being generated by the engine," said Awadallah.

Overall, he emphasized that the strategy for his company is to help businesses not just find the right search results, but to deliver actions for end users.

"The longer term belief is we're moving from search engines to answer engines," said Awadallah. "Right now what we're doing is 'answer engines' — meaning I don't give you back a list of results, I'm giving you back the answer. But if you get the answers to be truly accurate, we can move from answer engines to action engines." "
3,181
2,023
"Top AI researchers and CEOs warn against 'risk of extinction' in joint statement | VentureBeat"
"https://venturebeat.com/ai/top-ai-researchers-and-ceos-warn-against-risk-of-extinction-in-joint-statement"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top AI researchers and CEOs warn against ‘risk of extinction’ in joint statement Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A group of the world’s leading artificial intelligence (AI) experts — including many pioneering researchers who have sounded alarms in recent months about the existential threats posed by their own work — released a sharply worded statement on Tuesday warning of a “ risk of extinction ” from advanced AI if its development is not properly managed. The joint statement, signed by hundreds of experts including the CEOs of OpenAI , DeepMind and Anthropic aims to overcome obstacles to openly discussing catastrophic risks from AI, according to its authors. It comes during a period of intensifying concern about the societal impacts of AI, even as companies and governments push to achieve transformative leaps in its capabilities. “AI experts, journalists, policymakers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” reads the statement published by the Center for AI Safety. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion.” Luminary leaders recognize concerns The signatories include some of the most influential figures in the AI industry, such as Sam Altman, CEO of OpenAI; Dennis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. These companies are widely considered to be at the forefront of AI research and development, making their executives’ acknowledgment of the potential risks particularly noteworthy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Notable researchers who have also signed the statement include Yoshua Bengio, a pioneer in deep learning; Ya-Qin Zhang, a distinguished scientist and corporate vice president at Microsoft; and Geoffrey Hinton, known as the “godfather of deep learning,” who recently left his position at Google to “ speak more freely ” about the existential threat posed by AI. Hinton’s departure from Google last month has drawn attention to his evolving views on the capabilities of the computer systems he has spent his life researching. At 75 years old, the renowned professor has expressed a desire to engage in candid discussions about the potential dangers of AI without the constraints of corporate affiliation. 
Call to action

The joint statement follows a similar initiative in March, when dozens of researchers signed an open letter calling for a six-month "pause" on large-scale AI development beyond OpenAI's GPT-4. Signatories of the "pause" letter included tech luminaries Elon Musk, Steve Wozniak, Bengio and Gary Marcus.

Despite these calls for caution, there remains little consensus among industry leaders and policymakers on the best approach to regulate and develop AI responsibly. Earlier this month, tech leaders including Altman, Amodei and Hassabis met with President Biden and Vice President Harris to discuss potential regulation. In subsequent Senate testimony, Altman advocated for government intervention, emphasizing the seriousness of the risks posed by advanced AI systems and the need for regulation to address potential harms.

In a recent blog post, OpenAI executives outlined several proposals for responsibly managing AI systems. Among their recommendations were increased collaboration among leading AI researchers, more in-depth technical research into large language models (LLMs), and the establishment of an international AI safety organization.

This statement serves as a further call to action, urging the broader community to engage in a meaningful conversation about the future of AI and its potential impact on society. "
3,182
2,023
"StoryKit releases text-to-video AI creation tool for enterprise customers | VentureBeat"
"https://venturebeat.com/ai/storykit-announces-ai-text-to-video-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages StoryKit releases text-to-video AI creation tool for enterprise customers Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat generated with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you have ever worked in digital marketing , or even just wanted to create some videos to show off your small business or creative endeavor, you know that it’s a time-intensive and often finicky task. Not only must you conceive of the video, draft a script, film or pull the footage needed — you must then put it all together and edit it to fit your distribution channels: web, YouTube, Instagram, TikTok, LinkedIn, Facebook, Twitter. Each of these social networks and platforms has its own video publishing options and recommended aspect ratios, so cutting a video to fit multiple platforms is a challenge in-and-of-itself. But what if there was a way to simplify all that: Just copy-and-paste existing text — say, a press release or training document — and generate a video directly from it, automatically sized to fit multiple platforms? That’s the promise of the new in-browser, text-to-video AI generation feature announced today by Storykit , a Swedish software-as-a-service (SaaS) company. Video from any text — even meeting notes The new feature from Storykit allows marketers and anyone interested in generating video to quickly convert written content into SEO-ready video campaigns. The user can simply paste text into Storykit’s web-based “AI script creator” field and the tool generates a new video script from it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A spokesperson for StoryKit told VentureBeat via email that the feature is built atop OpenAI’ s GPT-3, 3.5-turbo and GPT-4, “for the script writing part,” but that a “proprietary AI, Coen” is used “for storyboard and video building.” Storykit says the tool can turn any text — from product descriptions to blog archives to even the most minimal notes — into video. As the company website reads: “Haven’t even got a text? Just the bare bones of an idea? A half-mumbled voice note? That works, too.” Storykit’s AI pairs text with appropriate images and clips to generate comprehensive videos while preserving brand messaging and identity. Fredrik Strömberg, Storykit CPO and founder, says that “users can trust that their messaging remains on-brand since their videos are based on their own input.” VentureBeat’s test of the new software resulted in mixed success. 
Initially, the tool returned an error message saying it was "experiencing too many requests," but within a few moments we were able to generate a competent video script from the Storykit news release itself. Storykit's new text-to-video AI tool also allows enterprise customers to instantly translate their script and video into multiple languages. Helpful drop-down menus contain categories for the types of videos that enterprise customers large and small might want to generate: recruitment, case stories, e-commerce — even experimental.

Storykit's strategy

Storykit believes this could democratize the video creation process and streamline workflows for companies across the globe. Storykit CEO and founder Peder Bonnier calls it a "game changer." Founded in 2018, Storykit today claims more than 1,000 customers, including BKS Bank, which uses the company's video creation capabilities to produce high-quality content at scale.

According to Bonnier, the new AI tool is intended to broaden the video creation process, making it accessible to individuals outside of specialized roles. "Input any source material into the tool, choose which output you want — then you're done," said Bonnier. "This means that video creation is no longer a specialized role but a task anyone can do."

The company does not list pricing publicly on its website (it has different pricing tiers that customers can inquire about). It does offer a free trial version of its software that allows customers to make multiple videos. According to the company, Storykit seeks to let its customers capitalize on the popularity of text-driven videos on social media, which many viewers watch without sound. Its new text-to-video AI tool joins others in the fast-moving space, including Runway ML's new mobile app launched last month. "
3,183
2,023
"Secrets of using AI and data to supercharge customer engagement | VentureBeat"
"https://venturebeat.com/ai/secrets-of-using-ai-and-data-to-supercharge-customer-engagement"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Secrets of using AI and data to supercharge customer engagement Share on Facebook Share on X Share on LinkedIn Presented by Twilio The modern business playbook addresses a wide range of strategies and metrics for gaining and retaining customers: lead generation, return on ad spend, conversion, customer retention, customer lifetime value. But there is a dimension that stands apart, and that’s customer engagement. This refers to using first-party data and real-time personalization to connect with and engage customers at every stage of their journey, from prospect to buyer to user. By breaking down silos between marketing, sales and support applications, companies that create a unified view of each customer — with highly detailed, real-time data on those customers’ activities — are able to deliver far more intimate, personalized experiences that result in higher levels of customer engagement. Customer engagement lift, in turn, leads to increases in many other metrics. Research from Twilio has found that customer engagement is the key to unlocking customer retention, conversion and long-term loyalty. Our recent State of Customer Engagement Report found that investment in digital customer engagement generated an average 90% revenue increase in 2023. Such investments also increased companies’ abilities to address changing market conditions. What’s more, companies that were the most advanced at customer engagement (through the use of personalization, first-party customer data, and other indicators) more easily met and exceeded their overall financial goals. The race to capitalize on AI Artificial intelligence is transforming every sphere of life, and customer engagement is no exception. Twilio’s State of Personalization Report found that 92% of companies are already using AI to power personalization to some degree. But AI is only as good as its underlying data — and without good-quality data that helps brands truly understand their customer, customer experiences will continue to miss the mark. Half of companies surveyed say that getting accurate data for personalization is a struggle, an increase of ten percentage points compared to 2022. Meanwhile 31% of respondents said that poor quality data is a major obstacle in leveraging AI. To improve AI results and personalization overall, companies need to invest in data quality, leveraging effective, real-time data management tools, and continuing to increase their use of first-party data, not third-party data “rented” from social networks, search engines or data brokers. 
The power of personalization: Nextdoor case study

What does this look like in practice? Consider Nextdoor, an online platform that connects real people with one another and the neighborhoods that matter to them to build community, share information and news, and create a sense of belonging. Nextdoor is currently available in 11 countries, and in the U.S. it's used by 1 in 3 households.

Nextdoor works with a network of advertisers to deliver ads to its audience of users. Because neighbors are influenced by their communities and often come to Nextdoor with high intent for products and services, it's important for ads to be highly targeted geographically, with localization at the network level. Nextdoor gives advertisers the opportunity to personalize ads to individual neighborhoods, making them more relevant and authentic to the community. Nextdoor also leverages business recommendations — over 3.9M businesses with claimed pages across the platform — to alert local small businesses to opportunities to target neighborhoods. If 15 people in a neighborhood have already recommended a particular pizza shop, the shop's owners might welcome the opportunity to reach all 3,000 people in that neighborhood.

In a test performed in 2023, Nextdoor found that when it added a layer of personalization to its campaigns, the click-through rate rose by 8%. But remarkably, personalization also improved the click-to-conversion ratio by 34%. In fact, thousands of companies are learning that personalization, when done right, supercharges their conversion rates, share of customer wallet and loyalty. Twilio's research found that 86% of consumers say that personalized experiences increase their loyalty to brands, and consumers spend on average 21% more on brands that personalize. On the flip side, 66% of consumers say they will stop using a brand if their experience is not personalized.

Personalization, in this case, doesn't mean putting $cust_firstname in the subject line of an email. It means delivering relevant recommendations for things a customer might actually be interested in, at the right time, through the channel that the customer prefers, be it email, SMS, or some other means. It means helping the customer accomplish goals that are meaningful to them. It means reminding them to complete a purchase they actually intended to make, not blasting them with irrelevant retargeting ads just because they stopped by your website once.

Focus on activating data

It all comes back to the data. But to improve customer engagement, it's not enough simply to collect it: Companies need to activate that data in real time. That means making sense of it, connecting data from different sources or applications, creating unified "golden" profiles of each customer, and building campaigns and engagements that make use of that data.

Twilio's customer engagement research suggests that brands should accelerate their shift away from third-party data and toward first- and zero-party data. Nearly one-third of consumers always or often reject cookies on websites, while nearly two-thirds (65%) of consumers would prefer brands use only first-party data to personalize their experiences. This also lays the foundation for brands to capitalize on emerging predictive and generative AI, by combining large language models with proprietary datasets of accurate first-party data to speed up campaign creation and better support interactions. When it comes to personalization, investing in real-time personalization delivers the best results.
This means delivering meaningful targeting and customization that is based on the most up-to-date data possible, including sales and product interactions. Do all that, and your company can experience the same kinds of customer engagement benefits that Nextdoor and other leading companies have seen.

Learn how to activate your data to unlock more customer value.

Katrina Wong is VP Marketing at Twilio Segment. "
3,184
2,023
"One-third of people can't tell a human from an AI. Here's why that matters | VentureBeat"
"https://venturebeat.com/ai/one-third-of-people-cant-tell-a-human-from-an-ai-heres-why-that-matters"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages One-third of people can’t tell a human from an AI. Here’s why that matters Share on Facebook Share on X Share on LinkedIn Image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today OpenAI-rival AI21 Labs released the results of a social experiment, an online game called “ Human or Not , ” which found that a whopping 32% of people can’t tell the difference between a human and an AI bot. The game, which the company said is the largest-scale Turing Test to date, paired up players for two-minute conversations using an AI bot based on leading large language models (LLMs) such as OpenAI’s GPT-4 and AI21 Labs’ Jurassic-2, and ultimately analyzed more than a million conversations and guesses. The results were eye-opening: For one thing, the test revealed that people found it easier to identify a fellow human — when talking to humans, participants guessed right 73% of the time. But when talking to bots, participants guessed right just 60% of the time. Educating participants on LLM capabilities But beyond the numbers, the researchers noted that participants used several popular approaches and strategies to determine if they were talking to a human or a bot. For example, they assumed bots don’t make typos, grammar mistakes or use slang, even though most models in the game were trained to make these types of mistakes and to use slang words. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Participants also frequently asked personal questions, such as “Where are you from?”, “What are you doing?” or “What’s your name?”, believing that AI bots would not have a personal history or background, and that their responses would be limited to certain topics or prompts. However, the bots were mostly able to answer these types of questions, since they were trained on a lot of personal stories. After the two minute conversations, users were asked to guess who they had been speaking with — a human or a bot. After over a month of play and millions of conversations, results have shown that 32% of people can’t tell the difference between a human and AI. And in an interesting philosophical twist, some participants assumed that if their discussion partner was too polite, they were probably a bot. But the purpose of ‘Human or AI’ goes far beyond a simple game, Amos Meron, game creator and creative product lead at the Tel Aviv-based AI21 Labs, told VentureBeat in an interview. 
“The idea is to have something more meaningful on several levels — first is to educate and let people experience AI in this [conversational] way, especially if they’ve only experienced it as a productivity tool,” he said. “Our online world is going to be populated with a lot of AI bots , and we want to work towards the goal that they’re going to be used for good, so we want we want to let people know what the technology is capable of.” AI21 Labs has used game play for AI education before This isn’t AI21 Labs’ first go-round with game play as an AI educational tool. A year ago, it made mainstream headlines with the release of ‘Ask Ruth Bader Ginsburg,’ an AI model that predicted how Ginsburg would respond to questions. It is based on 27 years of Ginsburg’s legal writings on the Supreme Court, along with news interviews and public speeches. ‘Human or AI’ is a more advanced version of that game, said Meron, who added that he and his team were not terribly surprised by the results. “I think we assumed that some people wouldn’t be able to tell the difference,” he said. What did surprise him, however, was what it actually teaches us about humans. “The outcome is that people now assume that most things humans do online may be rude, which I think is funny,” he said, adding the caveat that people experienced the bots in a very specific, service-like manner. Why policymakers should take note Still, with U.S. elections coming down the pike, whether humans can tell the difference between another human and an AI is important to consider. “There are always going to be bad actors, but what I think can help us prevent that is knowledge,” said Meron. “People should be aware that this technology is more powerful than what they have experienced before.” That doesn’t mean that people need to suspicious online because of bots, he emphasized. “If it’s a human phishing attack, or a human with a [convincing alternate] persona online, that’s dangerous,” he said. Nor does the game tackle the issue of sentience, he added. “That’s a different discussion,” he said. But policymakers should take note, he said. “We need to make sure that if you’re a company and you have a service using an AI agent , you need to clarify whether this is a human or not,” he said. “This game would help people understand that this is a discussion they need to have, because by the end of 2023 you can assume that any product could have this kind of AI capability.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,185
2,023
"Hyro doubles down on plug-and-play AI assistants with $20M funding | VentureBeat"
"https://venturebeat.com/ai/hyro-doubles-down-on-plug-and-play-ai-assistants-with-20m-funding"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hyro doubles down on plug-and-play AI assistants with $20M funding Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hyro , an adaptive communications company that offers plug-and-play AI assistants for enterprises, today announced it has raised $20 million in a series B funding round. The company said it will use the round — which was led by Macquarie Capital — to hire across departments and build out its offering for conversational and generative AI-driven call centers , websites or mobile applications. It will also expand strategic partnerships, integrations and use cases across key industries. The funding comes as enterprise interest in AI for streamlining business functions and end-user experiences continues to soar. In a recent survey conducted by Accenture , 98% of global executives agreed that AI foundation models will play an important role in their organization’s strategies in the next 3 to 5 years. Hyro plans to capitalize on this. How does Hyro’s AI help? Founded in 2018, Hyro offers an enterprise platform that provides plug-and-play tools to help companies implement a layer of conversational AI assistants on top of their existing omnichannel workflows without any kind of coding. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Hyro’s key differentiator is our ability to come to the first meeting with a prospect and show them a fully functional AI assistant built on their own data, without them needing to do anything at all,” Hyro’s VP of marketing Aaron Bours tells VentureBeat. “Assistants that usually take 6-12 months to build (based on predefined intents, data training and ML) are ready to launch pending procurement, and that excites enterprise buyers who have been previously underwhelmed by chatbot companies.” Once an enterprise points Hyro towards a business logic it wants to achieve and its internal knowledge, the platform automatically scrapes unstructured and structured data and maps it to a knowledge graph with all the different entities and attributes already embedded, made queryable by natural language (through voice or text). Then, using this graph, it generates a conversational AI assistant that can be embedded across channels like websites, mobile apps, call centers and more. “When the content updates, the conversations update — that’s where we avoid maintenance/ IT resource deployment usually associated with NLP/NLU solutions,” Bours explained. 
Deployment in three days

Hyro claims it can deliver AI assistants with text and chat capabilities in a matter of three days. It primarily focuses on the information-heavy healthcare industry, serving providers like Mercy Health, Baptist Health and Intermountain Healthcare, with AI agents automating tasks like patient registration, routing, scheduling, FAQs, IT helpdesk ticketing and prescription refills. This makes up roughly 60 to 70% of inbound calls and messages into health systems, the company said.

Moreover, the AI assistants provided by Hyro are also paired with conversational intelligence, where organizations get an out-of-the-box dashboard to see performance and engagement metrics, top trends, keywords, missing terms and explainability surrounding AI outputs. These reports are auto-generated, promoting full visibility and a constant feedback loop for optimization.

How Hyro is planning ahead

With this round (which takes Hyro's total capital raised to $35 million), the company will build out its platform for the healthcare industry and focus on expanding with new strategic partnerships and more in-depth integrations with key CRMs and telephony systems.

On the product side, the company will continue to refine NLU rates and upgrade the conversational intelligence capabilities of the platform with real-time alerts, upgrades and benchmarking. It will also invest in capturing more granular data points that allow for stronger reporting of trending topics, keywords and more. "For example, we can tell enterprises that they may be in for a spike in traffic for a certain service before it happens," Bours explained.

In addition to doubling down on the healthcare space, Hyro also intends to deploy go-to-market efforts and resources toward other highly regulated industries like insurance, as these will also need explainable AI assistants to field repetitive calls and messages in light of growing workforce shortages. "
3,186
2,023
"ChatGPT launched six months ago. Its impact — and fallout — is just beginning | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/chatgpt-launched-six-months-ago-its-impact-and-fallout-is-just-beginning-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ChatGPT launched six months ago. Its impact — and fallout — is just beginning | The AI Beat Share on Facebook Share on X Share on LinkedIn Image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A mere six months ago today, OpenAI released ChatGPT. Since then, it’s been a dizzying AI ride: The “interactive, conversational model” became the talk of the AI community within days and a global cultural phenomenon within weeks. In a new, unsettling twist, ChatGPT’s massive popularity can also be tied directly to today’s mainstream media headlines like “ AI Poses ‘Risk of Extinction,’ Industry Leaders Warn ,” as leaders from top AI labs like OpenAI, Google DeepMind and Anthropic warned in a 22- word statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” But clearly, the impact — as well as the fallout — of ChatGPT is just beginning. I think it’s worth marking the half-birthday of the world’s most well-known AI chatbot, which got the generative AI hype machine going at full speed. It has already inspired a wave of creative and business applications and become a central part of discussions about ethics, privacy, copyright, data security and misinformation. ChatGPT has generated comments in Senate hearings , inspired fear in Hollywood , freaked out teachers and gotten New York City lawyers in trouble — but also made millions excited about the opportunities to boost productivity, efficiency, creative ideation and knowledge management. Words like “hallucinations” and “prompt engineering” have become part of the public discourse, while job displacement concerns have exploded, and policymakers have sprinted to try and catch up to the sudden wave of powerful AI development. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ChatGPT was a low-key surprise in November 2022 On November 30, 2022, OpenAI’s announcement was a low-key surprise. At the time, the AI community was mostly talking about what was going on at NeurIPS, a top machine learning and computational neuroscience conference, which was in full swing in New Orleans. 
There were whispers that details about GPT-4 were going to be revealed there, but instead, OpenAI suddenly announced a new model in its family of AI-powered large language models: text-davinci-003, part of what it called the "GPT-3.5 series," which reportedly improved on its predecessors by handling more complex instructions and producing higher-quality, longer-form content.

And then, there it was: After the GPT-3.5 announcement, OpenAI launched an early demo of ChatGPT, another part of the GPT-3.5 series, whose dialogue format made it possible "to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

In a blog post, OpenAI CEO Sam Altman wrote that language interfaces "are going to be a big deal, I think. Talk to the computer (voice or text) and get what you want, for increasingly complex definitions of 'want'!" He cautioned that it is an early demo with "a lot of limitations — it's very much a research release." But, he added, "This is something that scifi really got right; until we get neural interfaces, language interfaces are probably the next best thing."

Altman's comments immediately sent thousands of AI practitioners to their keyboards to try out the ChatGPT demo and put the tech world in full swoon mode. Within days, Aaron Levie, CEO of Box, tweeted that "ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward." Y Combinator cofounder Paul Graham tweeted that "clearly something big is happening." Alberto Romero, author of The Algorithmic Bridge, called it "by far, the best chatbot in the world." And even Elon Musk weighed in, tweeting that ChatGPT is "scary good. We are not far from dangerously strong AI."

The hidden danger lurking within ChatGPT

It didn't take long, however, to identify the hidden danger lurking within ChatGPT: It quickly spits out eloquent, confident responses that often sound plausible and true even if they are not. The model was trained, it was noted, to predict the next word for a given input, not whether a fact is correct.

By the first week of December, Arvind Narayanan, a computer science professor at Princeton, pointed out in a tweet: "People are excited about using ChatGPT for learning. It's often very good. But the danger is that you can't tell when it's wrong unless you already know the answer. I tried some basic information security questions. In most cases the answers sounded plausible but were in fact BS."

Even OpenAI's Sam Altman admitted ChatGPT's risks early on. "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness," he tweeted on December 10. "It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness."

ChatGPT coverage has never slowed

Those risks, however, did not stop the forward march of ChatGPT and other LLMs. By mid-December, experts were saying ChatGPT was having "an iPhone moment." By January, a top AI conference banned the use of ChatGPT in paper submissions. In February, ChatGPT competitors like Anthropic's Claude grabbed some of the spotlight, while Google and Microsoft launched dueling generative AI debuts with Bing and Bard.
March brought a wave of chatbot-powered productivity apps, while pioneering AI researcher Yoshua Bengio called ChatGPT a "wake-up call" just in time for OpenAI to move ahead with GPT-4 in yet another surprise announcement. By April, open-source LLMs were having their own moment — and fierce debate. But nothing seems to be able to unseat the overwhelming popularity of ChatGPT in the public imagination, even though a recent Pew Research survey found that while a majority of Americans have heard of ChatGPT, few have tried it themselves.

Happy half-birthday, ChatGPT. As a large language model, can you plan a proper celebration? "
3,187
2,023
"Blink Ops launches AI copilot to streamline security automation | VentureBeat"
"https://venturebeat.com/ai/blink-ops-launches-ai-copilot-to-streamline-security-automation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Blink Ops launches AI copilot to streamline security automation Share on Facebook Share on X Share on LinkedIn Image Credit: Blink Ops Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Blink Ops , a cybersecurity startup based in Tel Aviv, has launched a new software product that uses generative AI to create no-code workflows for security and IT operations. The service, called Blink Copilot, allows security operators to automate any security workflow by writing simple text prompts. The company claims that Blink Copilot is the first of its kind in the market and that it can significantly reduce the time and effort required to automate security workflows. “Blink Copilot is using multiple different large language models (LLMs) at its core (Microsoft Cognitive Services, Google Bard, OpenAI) that are fine-tuned using Blink’s dataset of thousands of security and infrastructure workflows, as well hundreds of third party security integrations,” Blink Ops CEO and cofounder Gil Barak told VentureBeat. Automate security workflows using simple text prompts Starting today, Blink’s security platform now offers several features aimed at automating security. Its key updates include Blink Copilot, the AI system that can generate workflows based on text prompts; a drag-and-drop editor for customizing workflows; a library of more than 7,000 pre-built automated workflows for common security tasks; an integration hub for connecting the platform to other security tools; and a workflow engine for executing the automated processes at scale across on-premises and cloud environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to Barak, security workflows that would normally take months to automate can now be built in seconds using Blink Copilot. The company also says that its internal team has already generated more than 7,000 workflows using the product and that it is publishing hundreds of new workflows every week. “For example, during a recent demo, I asked Blink Copilot to generate a workflow that monitors for new vulnerabilities in cloud infrastructure, automatically looks up the relevant engineers that can fix them and assigns a ServiceNow ticket with a due date of 48 hours,” said Barak. 
Similarly, Barak added, "teams might create complex workflows for onboarding new employees to different services with permissions pre-configured, or set up quarantine workflows for at-risk devices which automatically lock accounts and send notifications asking employees to confirm recent activities."

Making security automation accessible to everyone

A shortage of cybersecurity professionals has made automation crucial for companies that must defend against a high volume of cyberattacks, according to a 2022 McKinsey report. Blink Ops says its new AI system significantly lowers the barriers for automating security workflows that previously required months of labor.

Barak pointed out that there are more than 3.4 million unfilled security roles, according to the 2022 (ISC)² Cybersecurity Workforce Study. "Humans will never be able to fill all those roles, so we'll need to rely on human-guided automation to manage security workflows," he said. "Blink Copilot will finally enable security teams to effectively automate and manage security workflows, regardless of team size."

The company said the Blink platform is designed for enterprise cybersecurity teams and includes features such as role-based permissions, support for on-premises and cloud environments and the ability to serve multiple customers. Blink Ops, which was founded in 2021, is backed by venture capital firms including Lightspeed Venture Partners and Entrée Capital. The company has offices in San Francisco in addition to its headquarters in Tel Aviv. "
3,188
2,023
"Aporia and Databricks partner to enhance real-time monitoring of ML models | VentureBeat"
"https://venturebeat.com/ai/aporia-and-databricks-partner-to-enhance-real-time-monitoring-of-ml-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aporia and Databricks partner to enhance real-time monitoring of ML models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Machine learning (ML) observability platform Aporia today announced a strategic partnership with Databricks. According to the companies, the collaboration aims to empower customers who utilize Databricks’ lakehouse platform, AI capabilities and MLflow offerings by providing them with advanced monitoring features for their ML models. Organizations can now monitor their ML models in real-time by deploying Aporia’s new ML observability platform directly on top of Databricks, eliminating the need for duplicating data from their lakehouse or any other data source. Moreover, the integration with Databricks streamlines the monitoring process, according to the companies, allowing for the analysis of billions of predictions without the need for data sampling, making changes to production code or incurring hidden storage costs. “This means monitoring billions of predictions, visualizing and explaining ML models in production becomes effortless,” Aporia CEO Liran Hason told VentureBeat. “Aporia supports all use cases and model types, providing flexibility for ML teams to tailor the platform to their specific needs.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Real-time monitoring, customization The new offering allows monitoring for anomalies such as drift, bias, degradation and data integrity issues and triggers live alerts to popular communication channels, ensuring timely notifications. The platform also provides real-time customizable dashboards and metrics, enabling each ML stakeholder to prioritize their key areas of interest and translate data science metrics into tangible business impact. This is crucial in industries including lending, hiring and healthcare, Hason said, and promotes a fair and transparent landscape in automated decisions. “Organizations would now be able to manage all ML models under a single hub, regardless of deployment,” said Hason. “This centralization enhances collaboration, facilitates communication and streamlines model management, fostering continuous learning and efficient team workflows.” Streamlining data monitoring with ML Observability Organizations have traditionally encountered challenges when monitoring large volumes of data, often necessitating data duplication from their data lake to their monitoring platform. 
However, said Hason, this approach leads to inaccuracies, overlooked issues, drift, false positive alerts and difficulties in ensuring fairness and bias monitoring. The new integration with Databricks addresses these pain points by allowing organizations to monitor all their ML models on Databricks swiftly, within minutes. Additionally, the integration maximizes the benefits of existing database investments — even for use cases that involve processing extensive volumes of predictions, such as recommendation systems, search ranking models, fraud detection models and demand forecasting models. “There is no need to duplicate data onto a separate monitoring environment,” Hason explained. “This ensures a single source of truth derived directly from your data lake, simplifying data management and accelerating insights-to-actions. The integration enhances the effectiveness of ML model monitoring and provides flexibility and freedom for organizations to leverage their existing ML and data infrastructure to its full potential.” Numerous use cases The company said the new ML observability platform will support many use cases, including enhancing recommendation systems through real-time performance monitoring. Organizations can leverage Aporia to improve their search ranking algorithms, gaining insights into click-through rates and enhancing search results. In addition, Aporia’s real-time monitoring helps detect and prevent fraudulent activities, bolstering security and fostering customer trust. Furthermore, the platform ensures accurate predictions in supply chain management and retail by monitoring demand forecasting models, enabling teams to optimize their response to deviations from a forecasted demand. The platform’s observability capabilities will also assist financial institutions in monitoring credit risk models, ensuring accurate and unbiased credit assessments while identifying potential biases. The Databricks delta connector establishes a connection between Aporia and an organization’s Databricks delta lake, linking training and inference datasets to Aporia, Hason explained. The platform distinguishes itself in monitoring large-scale data by effortlessly handling billions of predictions without resorting to data sampling, said Hason. This ensures a comprehensive and precise assessment of model performance, which is particularly beneficial for organizations grappling with substantial data volumes. “No critical insights go unnoticed, guaranteeing thorough monitoring,” he added. Unleashing the power of data for informed decision-making Hason said that the partnership will assume a crucial role in propelling the wider adoption of observability in the AI and ML landscape, as it addresses existing demand and nurtures a deeper comprehension and acknowledgment of observability as a pivotal element in AI and ML. He said that the combination of a robust observability platform and a scalable data platform offers a compelling proposition for organizations investing in AI and ML. The enterprises are developing a unified tool that enhances observability at scale, empowering organizations to make informed decisions and optimize their AI initiatives. “The partnership is specifically designed to deliver a centralized, end-to-end, cost-effective solution, empowering organizations to make confident data-driven decisions,” added Hason. Organizations can monitor all production data in minutes, ensuring a rapid time-to-value. This accelerated implementation quickly unlocks the benefits of the investment. 
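To make the drift-monitoring idea concrete, here is a minimal, generic sketch of the kind of check such a platform automates at scale: a Population Stability Index (PSI) comparison between training-time and live prediction distributions. It is illustrative only; the synthetic data and alert threshold are assumptions, and it does not use Aporia's or Databricks' actual APIs.

```python
# Illustrative only: a generic Population Stability Index (PSI) drift check of the
# kind an ML observability platform automates. Data and threshold are assumptions,
# not Aporia's or Databricks' APIs.
import numpy as np

def population_stability_index(reference: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Compare a production score distribution against its training-time reference."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Clip to avoid log(0) for empty buckets.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

# In practice both arrays would be read straight from the lakehouse (for example,
# a Delta table of logged predictions) rather than duplicated into another store.
reference_scores = np.random.default_rng(0).beta(2, 5, 50_000)   # training-time scores
production_scores = np.random.default_rng(1).beta(2, 3, 50_000)  # today's live predictions

psi = population_stability_index(reference_scores, production_scores)
if psi > 0.2:  # a commonly used, but arbitrary, alerting threshold
    print(f"Drift alert: PSI={psi:.3f}")
```

In a production setup the same calculation would run continuously over each model's features and outputs, with alerts routed to the team's communication channels, which is the workflow the companies describe automating.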
“These new functionalities can save organizations valuable resources that would otherwise be spent on troubleshooting and rectifying issues,” said Hason. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,189
2,023
"AI experts challenge 'doomer' narrative, including 'extinction risk' claims | VentureBeat"
"https://venturebeat.com/ai/ai-experts-challenge-doomer-narrative-including-extinction-risk-claims"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI experts challenge ‘doomer’ narrative, including ‘extinction risk’ claims Share on Facebook Share on X Share on LinkedIn Image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Top AI researchers are pushing back on the current ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). These include yesterday’s Statement on AI Risk , signed by hundreds of experts including the CEOs of OpenAI , DeepMind and Anthropic , which warned of a “ risk of extinction ” from advanced AI if its development is not properly managed. Many say this ‘doomsday’ take, with its focus on existential risk from AI, or x-risk, is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and cybersecurity. The truth is, most AI researchers are not focused on or highly-concerned about x-risk, they emphasize. “It’s almost a topsy-turvy world,” Sara Hooker, head of the nonprofit Cohere for AI and former research scientist at Google Brain, told VentureBeat. “In the public discourse, [x-risk] is being treated as if it’s the dominant view of this technology.” But, she explained, at machine learning (ML) conferences such as the recent International Conference on Learning Representations (ICLR) in early May that attracts researchers from all over the world, x-risk was a “fringe topic.” “At the conference, the few researchers who were talking about existential threats said they felt marginalized because they were in the minority,” she said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Normalizing existential AI threats through repetition Mark Riedl, professor at the Georgia Institute of Technology, pointed out that those concerned with existential risk are not a monolithic group — they range from those convinced that we have crossed a threshold (like Eliezer Yudkowsky), to those who believe it is imminent and inevitable (like OpenAI’s Sam Altman). This is in addition to those in a wait-and-see mode, and those who don’t see an obvious path to AGI without some new breakthrough. But, said Riedl, statements by prominent researchers and leaders of large tech companies seem to be receiving an outsized amount of attention in social media and in the press. “Existential threats are often reported as fact,” he told VentureBeat. 
“This goes a long way to normalizing, through repetition, the belief that only scenarios that endanger civilization as a whole matter and that other harms are not happening or are not of consequence.” Yacine Jernite, an ML researcher at Hugging Face , pointed to a tweet by Timnit Gebru yesterday which likened the constant x-risk narrative to a DDoS attack — that is, when a cyber attacker floods a server with internet traffic to prevent users from accessing services and sites. Literally hundreds of X Center for X Future X {Humanity/Life…} X Center X AI Safety X Center for Existential Risks X X Center for Catastrophic Risks X All funded by white men #TESCREAL billionaires so worried about saving humanity bombarding us ? its like a DDOS attack. There is so much attention flooded onto x-risk, he said, that it “takes the air out of more pressing issues” and insidiously puts social pressure on researchers focused on other current risks and makes it hard to hold those focused on x-risk accountable. It also plays into issues of regulatory capture, he added, pointing to OpenAI’s recent actions as an example. “Some of these people have been pushing for an AI licensing regime, which has been rightfully attacked on grounds of pushing for regulatory capture,” he said. “The existential risk narrative plays into this by [companies saying] we’re the ones who should be making the rules for how [AI] is governed.” At the same time, OpenAI can say it will leave the EU if it is “overregulated,” he explained, alluding to last week’s threats from CEO Sam Altman. Drowning out voices seeking to draw attention to current harms Riedl admitted that the authors of the Statement on AI Risk acknowledge that one can be concerned about long-term, low-probability events and also be concerned about near-term, high-probability harms, But this overlooks the fact that the “doomer” narrative drowns out voices that seek to draw attention to real harms occurring to real people right now, he explained. “These voices are often from those in marginalized and underrepresented communities because they have experienced similar harms first-hand or second-hand,” he said. Also, outsized attention on one aspect of AI safety indirectly affects how resources are allocated. “Unlike worry, which is in infinite supply, other resources like research funding (and attention) are limited,” he said. “Not only are those who are most vocal about existential risk already some of the most well-resourced groups and individuals, but their influence can shape governments, industry, and philanthropy.” Cohere for AI’s Hooker agreed, saying that while it is good for some people in the field to work on long-term risks, the amount of those people is currently disproportionate to the ability to accurately estimate that risk. “My main concern is that it minimizes a lot of conversations around present day risk and in terms of allocation of resources,” she said. “I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. 
Because for me, that’s what a lot of researchers work on day in, day out and it displaces visibility and resources for the efforts of many researchers who work on safety.” The bombastic views around existential risk may be “more sexy,” she added, but said it hurts researchers’ ability to deal with things like hallucinations , factual grounding, training models to update, making models serve other parts of the world, and access to compute: “So much of researchers’ frustration right now is about how do they audit, how do they participate in building these models?” ‘Baffled by the positions these prominent people are taking’ Thomas G. Dietterich, an ML pioneer and emeritus professor of computer science at Oregon State University, was blunt in his assessment of yesterday’s Statement on AI Risk. “I am baffled by the positions these prominent people are taking,” he told VentureBeat. “In the parts of AI outside of deep learning, most researchers think industry and the press are wildly over-reacting to the apparent fluency and breadth of knowledge of LLMs.” Dietterich said that in his opinion, the greatest risk that computers pose is through cyberattacks such as ransomware and advanced persistent threats designed to damage or take control of critical infrastructure. “As we figure out how to encode more knowledge in computer programs (as in ChatGPT and Stable Diffusion), these programs become more powerful tools for design, including the design of cyber attacks,” he said. Examining underlying financial incentives So why are industry leaders and prominent researchers raising the specter of AI as an existential risk? Dietterich noted that the organizations warning of existential risk, such as the Machine Intelligence Research Institute, the Future of Life Institute, the Center for AI Safety and the Future of Humanity Institute, obtain their funding precisely by convincing donors that AI existential risk is a real and present danger. “While I don’t question the sincerity of the people in these organizations, I think it is always worth examining the financial incentives at work,” he said. “By the same token, researchers like me receive our funding because we convince government funding agencies and companies that improving AI software will lead to benefits in furthering scientific research, advancing health care, making economies more efficient and productive and strengthening national defense. While the warnings about existential risk remain extremely vague, the research community has delivered concrete advances across science, industry and government.” Other prominent AI leaders are speaking out Many other prominent AI researchers are speaking out, on Twitter and elsewhere, against the ‘doomer’ narrative. For example, Andrew Ng insisted yesterday that AI will be a key part of the solution to existential risks: When I think of existential risks to large parts of humanity: * The next pandemic * Climate change→massive depopulation * Another asteroid AI will be a key part of our solution. So if you want humanity to survive & thrive the next 1000 years, lets make AI go faster, not slower. Meanwhile, AI researcher Meredith Whittaker, who was pushed out of Google in 2019 and is now president of the Signal Foundation, recently said that today’s x-risk alarmism from AI pioneers like Geoffrey Hinton is a distraction from more pressing threats. 
“It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up when people like Timnit [Gebru] and Meg [Mitchell] and others were taking real risks at a much earlier stage of their careers to try and stop some of the most dangerous impulses of the corporations that control the technologies we’re calling artificial intelligence,” she told Fast Company. How to handle the ‘doomer’ narrative For Riedl, there is room for concern for existential AI risks, although he emphasized that he has “personally yet to see claims or evidence that I find highly credible.” However, “if only the existential risk facet of AI safety receives attention and resources, then the ability to address current, ongoing harms will be negatively impacted,” he said. Hugging Face’s Jernite said that it is “tempting” to draft a counter-letter to the Statement on AI Risk. But he added that he won’t do that. “The statements have so many logical holes and we can spend so much of our time and energy trying to poke holes in each of those statements,” he said. “What I found both most useful and best for my mental health is to just keep on working on the things that matter,” he said. “You can’t [push back] every five minutes.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,190
2,023
"4 reasons our future is decidedly virtual | VentureBeat"
"https://venturebeat.com/virtual/4-reasons-our-future-is-decidedly-virtual"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 4 reasons our future is decidedly virtual Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In January, new reports on Apple’s long-awaited augmented reality/virtual reality headset were released. And if what’s in these reports is even partially true, Apple is poised to give the world one of the most jaw-dropping, powerful pieces of technology in history (again) — which is why it was a bit surprising that this news didn’t make more of a splash. This is the same company that has fans enter lotteries for tickets to corporate keynote addresses! Yet, outside of the usual tech blogs and a few newspaper columns, the future of Apple’s AR/VR device went largely unnoticed. Of course, Apple is not the first company in the virtual space to experience disappointment. Facebook stock has plummeted since it first announced its name change to Meta and made a commitment to the metaverse in October 2021. Sony just announced a drastic reduction in its anticipated launch numbers for its PlayStation VR2 headset, dropping its initial forecast by 50%. And the European Union was roasted on social media a few months ago after spending more than $400,000 to host an event in the metaverse that drew only a handful of thoroughly disappointed attendees. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! So, has the virtual bubble burst? Nonsense. Not only is virtual here to stay, it’s our future. Here are the four biggest reasons why, like it or not, humans will be living in a virtual-first world one day. 1. Tech The most common complaint about virtual reality is that it’s not the same as real life, which is true. But how long before it feels real? People forget that the first cell phones weighed 5 pounds, lost signal every few blocks and were so big they needed to be carried in briefcases. But today, we all carry phones that weigh a few ounces in our pockets, can connect seamlessly via video to almost any person on earth and boast more than 100,000 times the processing power of the computer that landed two men on the moon in 1969. Even with all the incredible achievements in AR/VR, we are still in the “cell phone in a briefcase” phase. The uncanny valley. But the technology will get there sooner than we think. How do I know? Because the businesses that get virtual right will be handsomely rewarded. Look at the success of 19 Crimes wine. 
The company took the basic paper wine label and turned it into a one-of-a-kind experience with the help of AR. Now, when consumers see a 19 Crimes bottle on the shelf, they are quick to pull out their phones and use the 19 Crimes app to bring the figure on the bottle to life. Are there wine brands that receive more awards and higher rankings than 19 Crimes? Certainly. But that’s not the point. Embracing virtual allowed the brand to take a typically mundane item (a paper wine label) and turn it into an experience, and in the process, went from 4 million bottles sold to 18 million in just 18 months. It’s the same principle used by The Wizarding World of Harry Potter at Universal Studios. The once “discount cousin” to Disney World, Universal Studios took everyday items such as cream soda and pieces of molded plastic and turned them into experiences. Now, that cream soda sells as a “Butterbeer” for $7.99, and that piece of plastic is a $55 wizard wand. Because they started selling an experience , not just a ticket for some rides, attendance to Universal Studios jumped 20% and revenue shot up more than 40% for the year. Younger generations are driving a consumer culture that’s much more focused on experiences than things, and they’re willing to pay a premium for them. As AR/VR tech improves, so will these experiences. Brands will no longer be limited by time, money and the physics of the natural world. They’ll only be limited by their imagination. 2. Gaming Did you know that 65% of American adults play video games? Or that the video game industry is five times larger than the film industry? How about the fact that nearly seven years after first going viral, the AR sensation “Pokémon Go” reported more than $1 billion in annual revenue in both 2020 and 2021, a 45% increase over what the game earned the year it made headlines across the globe? Video games are already the clearest use case for AR/VR. But in the future, these games won’t just be embraced at home. Once businesses realize the creativity and innovations they can unlock by immersing employees into virtual worlds, virtual gaming will become part of our work as well. While C-suite executives might look down their noses at gamers, they’d be shortsighted to do so. Video games encourage many of the behaviors that leaders want to see in their employees. Teamwork, communication, problem-solving, resilience to failure, innovation, creativity and more. Gamers are required to use these skills regularly while immersed in their virtual worlds. As soon as businesses see this, they’ll be quick to embrace virtual as the future of work. 3. Global warming From 2003 to 2019, the number of air passengers, and therefore volume of air travel, more than doubled. With this increase has come an increase in our impact on the planet. Scientists have estimated that a single passenger’s share of emissions on a flight from New York to Los Angeles is enough to melt 32 square feet of arctic summer ice, according to The New York Times. Our travel habits are causing serious harm to our planet. And virtual worlds and experiences are a major part of the solution. My sister recently attended the ABBA Voyage concert in London, where virtual avatars perform a 90-minute set, singing, dancing and traversing the stage in a way that is incredibly real. My sister even admitted to me that there was a long period of the show where she didn’t even realize it wasn’t the real ABBA. The virtual experience was that good. 
With mega reunion concerts all the rage these days, one forgets that a touring arena show requires dozens of semitrucks and countless international flights, making it a carbon emissions nightmare. But what if there were a future where fans could experience an amazing show locally (or even in their living rooms!) and the bands didn’t need to travel at all? While this may sound ridiculous, consider that the ABBA Voyage virtual concert has sold more than one billion tickets since it opened in May 2022. When you consider the speed at which this technology has improved, the future for virtual concerts and events (and their role in reducing carbon emissions) is quite bright. 4. Pandemics There was no greater glimpse at the future of virtual than during the COVID-19 pandemic. As the world braced itself against the virus, everything became virtual. Gyms were swapped with Peloton classes. Conference rooms for Zoom. Grocery stores for Instacart. And these changes weren’t just short-lived. Recent reports show major cities like New York losing more than $12 billion in annual revenue due to remote work. People are spending less on restaurants, bars, gyms, salons and retail stores, and more on goods and services that suit remote work lifestyles. I experienced this shift firsthand in my own business. Prior to the pandemic, 85% of my speaking engagements were at live, in-person events. But in 2021, I did 100% virtual events. In 2022 I snapped back to 70% live and 30% virtual. And now 2023 is shaping up to be a 50/50 split. I don’t believe I’ll ever go back to 100% in-person events. Although there was some attendee fatigue after two years of all-virtual events, event organizers are now realizing that the cost savings and convenience of virtual events are hard to beat. Plus, a steady mix of virtual events will help insulate the live event industry from future pandemics. Don’t think we’ll see another pandemic like COVID-19? Think again. In the past 40 years alone, we’ve had outbreaks of SARS, H1N1, MERS, Ebola and, of course, coronavirus, all just a few years apart. Sure, many of these outbreaks were regional, but humans have battled widespread disease since the beginning of time. And this battle won’t be going away anytime soon. So, there will always be a need for virtual events and gatherings. We’re all going virtual Stock prices and poor press aside, why did a company as successful as Facebook change its name to Meta and make a very risky, public commitment to the metaverse? Why is Apple spending years and investing countless dollars into building an AR/VR device for a category without a proven use case? Why is Disney filing metaverse-related patents and hiring a substantial number of employees to support its new metaverse strategy? It’s because they believe. And if some of the most innovative, successful companies on Earth believe the metaverse is the future, we should start believing, too. Sure, it might still take a few generations before humans live in a virtual-first world. But that world is coming. And we should prepare ourselves (and our businesses) for when it gets here. Duncan Wardle, former VP at The Walt Disney Company, runs iD8 & innov8. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. 
"
3,191
2,023
"Why security and resilience are essential for enterprise risk management | VentureBeat"
"https://venturebeat.com/security/why-security-and-resilience-are-essential-for-enterprise-risk-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why security and resilience are essential for enterprise risk management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Security threats have been making headlines for years. In 2020, the SolarWinds Attack was seen (at the time) as one of the most sophisticated and widespread cyberattacks conducted against the federal government and private sector, breaching thousands of organizations globally and propelling supply chain attacks to the front of security conversations. It seems that malicious actors are challenging governments and cyber defenses across all industries by targeting their ecosystem of IT partners. I believe the stakes are especially high for those in highly regulated industries, which can be exploited through their digital supply chain, giving hackers access to consumers’ valuable and sensitive data. Increasing cloud use: Increased risk However, the risks don’t stop there. Cyber resilience, and the broader considerations linked to operational resilience, are at the forefront of IT decisions, as banks and other financial institutions are becoming increasingly reliant on cloud. The U.S. government is taking note, releasing its evaluation on the consequences of cloud concentration as it can put financial stability at risk. Furthermore, the Biden administration’s national cybersecurity strategy can also be seen as a step to advance standards of security and compliance at different levels of engagement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While we must be prepared to protect and respond to malicious attacks, that is only one part of building a resilient organization. Some enterprises may fail to consider the risks to the business that can come with a lack of resiliency. Technical vulnerabilities such as an outage from a cloud provider can potentially negatively impact the integrity of cloud services — and moreover, disrupt business operations for customers. That is, if all workloads reside with a single cloud provider. This is why a hybrid multicloud approach can be crucial to keeping the lights on for enterprises to continue operations while dealing with a crisis. Growing scrutiny from regulators The White House isn’t the only government entity taking note. The recent report on cloud adoption from the U.S. Department of the Treasury issued concern about the potential impact of cloud services-based technology concentration on the financial sector. 
The report is a stepping stone in rolling out future recommendations in driving risk management. However, we should all consider this a strong signal of what’s to come — an industry effort to deal with regulations to reign in cloud concentration and supply chain dependence risk. But as enterprises navigate these growing regulations, they must remember there is one important factor that isn’t in question: The benefits of the cloud. In fact, cloud can be a force multiplier in security, enabling enterprises to improve their resiliency and reduce risk — when leveraged efficiently. Those operating in financial services need agile technology platforms that can help them rapidly modernize in response to evolving demands of their digital-first consumer — which include quickly securing loan approval in minutes to calculating the carbon footprint of their purchases. These daily activities require banks, FinTechs and other financial institutions to collect, store and manage their customers’ most confidential data. Cloud provides a tremendous opportunity to safeguard this data as the financial services industry breaks ground with innovation to expand financial inclusion and manage the financial well-being of our communities. However, we also recognize there’s a lot at stake here — customer trust and the confidence of regulators. I strongly believe financial institutions and their ecosystem of cloud partners need to solve cloud complexities together to mitigate potential resiliency threats. This means getting people, processes and technology to work in unison to manage complexities by design from the first stages of crafting an IT strategy through to execution. Remember cloud is not a destination; it’s an enabler We understand that regulators will always be challenged by the responsibility they have to evolve policies to build and sustain trust in the digital transformation journey. However, we all need to understand that the answer may not be sole reliance on a single cloud provider. It’s about understanding the uniqueness of your business processes and applications to develop a comprehensive workload placement strategy. The hybrid multicloud conversation should be focused on making intentional choices about where data and workloads are hosted and where workloads are deployed. These decisions should be made based on five parameters: resiliency, performance, security, compliance and total cost of ownership. The reality is that workloads may need to operate in different environments to function successfully. However, if it’s not done correctly, there could be unnecessarily accentuated risks. Mixing on-premises systems with an array of cloud environments can lead financial institutions to levels of operational complexity that can overwhelm IT teams. It is vital for FinTechs to appropriately plan from the outset to pick the appropriate deployment locations to manage data securely to mitigate risks. The fact is, there is no one-size-fits-all approach for industries that vastly have different wants and needs from an IT perspective. This is why it’s crucial for financial institutions to understand that cloud is not a destination — it’s an enabler. Thwarting cyber risks with cyber resiliency Recovering from a cyberattack within a hybrid multicloud environment can be challenging, with an assortment of workloads, infrastructure and equipment spread across multiple environments. 
This can be made worse by implementing security strategies in silos, paving the path for the dreaded “ Frankencloud ” environment that allows cyber predators to find their way into the organization. I believe cyber resiliency strategies should be designed with one single point of control, allowing financial institutions to gain a holistic view of their environment, as well as potential threats. This is where partnership execution is vital, with cloud providers co-creating and consolidating both a security and resiliency strategy across hybrid, multicloud environments. We need to ensure that cybersecurity is a top priority as enterprises continue to innovate and regulatory scrutiny continues to grow. I strongly believe hybrid, multicloud strategies are a pivotal step in the right direction to advance operational resiliency. However, the cloud community needs to build trust among financial institutions, regulators, and the government — it takes all of us. Howard Boville is SVP and head of IBM cloud platform. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,192
2,023
"US senator open letter calls for AI security at ‘forefront’ of development | VentureBeat"
"https://venturebeat.com/security/us-senator-open-letter-calls-for-ai-security-at-forefront-of-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages US senator open letter calls for AI security at ‘forefront’ of development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, Sen. Mark Warner (D-VA), chairman of the Senate Intelligence Committee, sent a series of open letters to the CEOs of AI companies, including OpenAI , Google, Meta, Microsoft and Anthropic , calling on them to put security at the “forefront” of AI development. “I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way,” Warner wrote in each letter. More broadly, the open letters articulate legislators’ growing concerns over the security risks introduced by generative AI. Security in focus This comes just weeks after NSA cybersecurity director Rob Joyce warned that ChatGPT will make hackers that use AI “much more effective,” and just over a month after the U.S. Chamber of Commerce called for regulation of AI technology to mitigate the “national security implications” of these solutions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The top AI-specific issues Warner cited in the letter were integrity of the data supply chain (ensuring the origin, quality and accuracy of input data), tampering with training data (aka data-poisoning attacks), and adversarial examples (where users enter inputs to models that intentionally cause them to make mistakes). Warner also called for AI companies to increase transparency over the security controls implemented within their environments, requesting an overview of how each organization approaches security, how systems are monitored and audited, and what security standards they’re adhering to, such as NIST’s AI risk management framework. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
3,193
2,023
"Tenable report shows how generative AI is changing security research  | VentureBeat"
"https://venturebeat.com/security/tenable-report-shows-how-generative-ai-is-changing-security-research"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Tenable report shows how generative AI is changing security research Share on Facebook Share on X Share on LinkedIn Programmer looking at code on a screen Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, vulnerability management provider Tenable published a new report demonstrating how its research team is experimenting with large language models (LLMs) and generative AI to enhance security research. The research focuses on four new tools designed to help human researchers streamline reverse engineering, vulnerability analysis, code debugging and web application security, and identify cloud-based misconfigurations. These tools, now available on GitHub , demonstrate that generative AI tools like ChatGPT have a valuable role to play in defensive use cases, particularly when it comes to analyzing code and translating it into human-readable explanations so that defenders can better understand how the code works and its potential vulnerabilities. “Tenable has already used LLMs to build new tools that are speeding out processes and helping us identify vulnerabilities faster and more efficiently,” the report said. “While these tools are far from replacing security engineers, they can act as a force multiplier and reduce some labor-intensive and complex work when used by experienced researchers.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Automating reverse engineering with G-3PO One of the key tools outlined in the research is G- 3PO , a translation script for the reverse engineering framework Ghidra. Developed by the NSA , G-3PO is a tool that disassembles code and decompiles it into “something resembling source code” in the C programming language. Traditionally, a human analyst would need to analyze this against the original assembly listing to ascertain how a piece of code functions. G-3PO automates the process by sending Ghidra’s decompiled C code to an LLM (supporting models from OpenAI and Anthropic) and requests an explanation for what the function does. As a result the researcher can understand the code’s function without having to analyze it manually. While this can save time, in a YouTube video explaining how G-3PO works, Olivia Fraser, Tenable’s zero-day researcher, warns that researchers should always double-check the output for accuracy. 
“It goes without saying of course that the output of G-3PO, just like any automated tool, should be taken with a grain of salt and in the case of this tool, probably with several tablespoons of salt,” Fraser said. “Its output should of course always be checked against the decompiled code and against the disassembly, but this is par for the course for the reverse engineer.” BurpGPT: The web app security AI assistant Another promising solution is BurpGPT , an extension for application testing software Burp Suite that enables users to use GPT to analyze HTTP requests and responses. BurpGPT intercepts HTTP traffic and forwards it to the OpenAI API, at which point the traffic is analyzed to identify risks and potential fixes. In the report, Tenable noted that BurpGPT has proved successful at identifying cross site scripting (XSS) vulnerabilities and misconfigured HTTP headers. This tool therefore demonstrates how LLMs can play a role in reducing manual testing for web application developers, and can be used to partially automate the vulnerability discovery process. “EscalateGPT appears to be a very promising tool. IAM policies often represent a tangled complex web of privilege assignments. Oversights during policy creation and maintenance often creep in, creating unintentional vulnerabilities that criminals exploit to their advantage. Past breaches against cloud-based data and applications proves this point over and over again,” said Avivah Litan, VP analyst at Gartner in an email to VentureBeat. EscalateGPT: Identify IAM policy issues with AI In an attempt to identify IAM policy misconfigurations, Tenable’s research team developed EscalateGPT , a Python tool designed to identify privilege-escalation opportunities in Amazon Web Services IAM. Essentially, EscalateGPT collects the IAM policies associated with individual users or groups and submits them to the OpenAI API to be processed, asking the LLM to identify potential privilege escalation opportunities and mitigations. Once this is done, EscalateGPT shares an output detailing the path of privilege escalation and the Amazon Resource Name (ARN) of the policy that could be exploited, and recommends mitigation strategies to fix the vulnerabilities. More broadly, this use case illustrates how LLMs like GPT-4 can be used to identify misconfigurations in cloud-based environments. For instance, the report notes GPT-4 successfully identified complex scenarios of privilege escalation based on non-trivial policies through multi-IAM accounts. When taken together, these use cases highlight that LLMs and generative AI can act as a force multiplier for security teams to identify vulnerabilities and process code, but that their output still needs to be checked manually to ensure reliability. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,194
2,023
"Q1 marked lowest VC funding for security in a decade, but there’s a silver lining  | VentureBeat"
"https://venturebeat.com/security/q1-marked-lowest-vc-funding-for-security-in-a-decade-but-theres-a-silver-lining"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Q1 marked lowest VC funding for security in a decade, but there’s a silver lining Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, DataTribe released a new report showing venture capital activity in the cybersecurity industry dropped significantly in Q1 2023. The report showed that although the cybersecurity industry experienced a less dramatic decline than the wider U.S. VC ecosystem, cybersecurity deal activity in Q1 was at or near decade lows, with an average seed deal volume of 21 in Q1 2023, compared to 20 in Q1 2015. Likewise, year-over-year cybersecurity seed deal volume was down 56%, from 48 deals to 21. Although, the report also noted that the seed-stage cybersecurity market remained “relatively bright,” with a median premoney valuation of $15.5 million, just behind the all-time high of $15.8M in Q4 2022. The bright side to lower VC funding While the overall decline in VC seed funding appears to be a major blow for the cybersecurity sector, the report argues that there’s an underlying silver lining: consolidation among solution providers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Fewer companies receiving more funding at higher valuations is likely a good thing for the sector, particularly the enterprise CISO, [who] is already overwhelmed with vendors trying to sell the latest product,” the report said. In an email interview with VentureBeat, John Funge, managing director at DataTribe, reaffirmed the report’s finding and argued that “while the slowdown is painful in some cases, we see it as an overall healthy thing.” Funge suggested that larger cybersecurity companies will be able to take advantage of the market environment to make acquisitions and consolidate solutions while weaker companies struggle to survive. “The medium- to long-term benefit of this will be some rationalization of the highly-fragmented tech stacks that enterprises depend on,” Funge said. One company that appears to illustrate this approach is cloud security provider Wiz , which despite the economic slowdown, managed to raise a $300M series D funding round and a $10 billion premoney valuation for a solution that consolidates cloud security posture management (CSPM) and cloud-native application protection platform (CNAPP) capabilities into a single solution. 
If Funge and DataTribe are correct that an economic slowdown will encourage rationalization in the industry, then this will likely be a net-positive for CISOs. They'll have an opportunity to reduce complexity throughout their tech stack and decrease the overall number of tools needed to secure their organizations' environments. "
3,195
2,023
"How post-quantum cryptography will help fulfill the vision of zero trust | VentureBeat"
"https://venturebeat.com/security/how-post-quantum-cryptography-will-help-fulfill-the-vision-of-zero-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How post-quantum cryptography will help fulfill the vision of zero trust Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Lost in the debate over if, or when, a quantum computer will decipher encryption models is the need for post-quantum cryptography (PQC) to become part of organizations’ tech stacks and zero-trust strategies. Enterprises need to follow the lead Cloudflare has taken and design PQC as a core part of their infrastructure, with the goal of extending zero trust beyond endpoints. At this week’s RSAC 2023 event, VentureBeat delved into the current state of PQC and learned how urgent the threat of quantum computing is to encryption and national security. Four sessions covered cryptography at the RSAC this year. The one that provided the most valuable insights was the Cryptographer’s Panel hosted by Dr. Whitfield Diffie, ForMemRS, Gonville and Caius College, Cambridge, with panelists Clifford Cocks, independent consultant; Anne Dames, IBM Infrastructure; Radia Perlman, Dell Technologies; and Adi Shamir, the Weizmann Institute, Israel. Dr. Shamir is a noted authority on cryptography, having contributed research and theory in the area for decades. Dr. Shami says that he doesn’t believe quantum computing to be an immediate threat, but RSA or elliptic curve cryptography could become vulnerable to decryption in the future. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Anne Dames of IBM warned that enterprises need to start thinking about which of their systems are most threatened by potential rapid advances in quantum computing. She advised the audience that public key cryptography systems are the most vulnerable ones. “Today, companies are facing AI- and machine learning-assisted crypto-attacks and other cryptographic threats that find vulnerabilities in software and hardware implementations,” writes Lisa O’Connor, managing director, Accenture Security, cybersecurity R&D, Accenture Labs. “If this weren’t worrisome enough, we’re one year closer to the breaking point of our 40-year-old cryptographic schema, which could bring business as we know it to a screeching halt. 
Quantum computing will break these cryptographic fundamentals." Harvest-now, decrypt-later attacks increasing The consensus of industry researchers, including members of government advisory committees interviewed at RSAC, predicts exponential growth in bad actors and advanced persistent threat (APT) groups that are funded by nation-states. They aim to crack encryption well ahead of the most optimistic estimates. Last year, the Cloud Security Alliance launched a countdown to Y2Q (years to quantum) that predicts just under seven years until quantum computing will be able to crack current encryption. CISOs, CIOs and their teams must commit to continual learning about post-quantum cryptography and its implications for their tech stacks in order to block "harvest-now, decrypt-later" attacks that are growing globally. "That's an area [where] I feel like the market needs to be thinking about much more, and that's where we've spent a fair amount of our resources, as well as what do you do today [as an organization to prepare]. So that when quantum does hit, you're not compromised at that point in time," Jeetu Patel, EVP & GM of security and collaboration business units at Cisco, told VentureBeat at RSAC this week. Patel compared the deciphering of encryption to Y2K: "The difference between quantum and Y2K is on day one of Y2K, things flipped over." All the work done on Y2K "was based on day one. Whereas … let's say it takes 10 years to get [PQC] to where it needs to be. Well, the bad actors have 10 years' worth of data, and [they] can unencrypt all of that … after the fact." Patel agreed that nation-states, too, are continuing to invest in quantum computing to crack encryption, shifting the balance of power in the process. Cybersecurity and AI leaders serving on government task forces tell VentureBeat that threats to cryptographic systems and the authentication technologies protecting them are considered high-priority for national security. Initiatives to counter the threat are being fast-tracked. The memorandum issued by the Executive Office of the President on May 4, 2022, "National Security Memorandum on Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems," is a start. Secretary of Homeland Security Alejandro N. Mayorkas had outlined his cybersecurity resilience vision in a speech on March 31, 2021. NIST will release a post-quantum cryptographic standard in 2024. Hacked encryption's first victim will be everyone's identities PQC shows potential for strengthening the areas of zero trust network access (ZTNA) where attackers are always searching for weaknesses. Identity and access management (IAM), multifactor authentication (MFA), microsegmentation and data security are some of the areas where PQC can strengthen any organization's zero-trust framework. CISOs tell VentureBeat that despite current economic headwinds, their best chance of getting funded is to build a business case for technologies that deliver measurable gains in protecting revenue and reducing risk. It's a bonus if the technology investment further strengthens their zero-trust security posture. PQC is now part of the conversation, driven to board-level awareness by NATO and the White House recognizing post-quantum threats and preparing for Y2Q. Gartner predicts that by 2025, post-quantum cryptography risk assessment will be the top security issue that businesses will look for advice on.
The advisory firm cautions startups to concentrate on clearly communicating the business value and advantage their PQC systems deliver, or they risk running out of funding. "By 2027, 50% of the startups in the quantum computing space will go out of business because they focused on quantum advantage/supremacy over business advantage for clients," writes Gartner in its research note, Emerging Tech: How to Make Money From Quantum Computing (client access required), published February 24 of this year. "Trust is the factor that unifies zero trust architecture (ZTA) and PQC," writes Jen Sovada, president, public sector, SandboxAQ, in her recent article Bridging Post-Quantum Cryptography and Zero Trust Architecture. "Implementation of both will require trusted identity, access and encryption that wrap around next-generation cybersecurity architectures using continuous monitoring. Cryptography — and more importantly, cryptographic agility enabled by PQC — offers a foundation for ZTA in a post-quantum world." PQC technologies' potential for protecting identities is already showing, and that's reason enough for CIOs and CISOs to track these technologies. While no one knows when a quantum computer will crack encryption algorithms, well-financed cybercriminal gangs and advanced persistent threat (APT) groups funded by nation-states have made it known they are all-in on attacking encryption algorithms before the world's organizations, large-scale enterprises and governments can react. The urgency to get PQC in place is warranted because broken encryption would be devastating. How and where post-quantum cryptography will benefit zero trust Planning now to strengthen zero-trust frameworks with PQC will help to close the security gaps in legacy approaches to cryptography. Closing these gaps is core to a future of identity-based security scaling beyond endpoints and the machine identities proliferating across networks. PQC's quantum-resistant algorithms will further harden the encryption technologies that zero trust's reliability, stability and scale rely on. Closing these gaps also strengthens confidentiality, integrity and authentication. PQC secures data in transit and at rest, further strengthening zero trust. By enabling secure communication among organizations and systems, PQC will help build a zero-trust digital ecosystem. Interoperability ensures secure connections with partners, suppliers and customers even as technology changes. Key areas where PQC will harden zero trust include identity and access management (IAM), privileged access management (PAM), microsegmentation, multifactor authentication (MFA), protecting log data and communications encryption, and data security, including protecting data at rest. Conclusion Industry leaders advising the government on the risks of quantum computing tell VentureBeat that over 50 nations are today investing in the technologies needed to break authentication and encryption algorithms. Harvest-now, decrypt-later attacks are motivated by everything from financial gain (for example, on the part of the North Korean government) to government and industrial espionage, where new technologies under development are targeted. CISOs and CIOs need to stay current on quantum computing threats and consider how they can capitalize on the momentum of zero trust to further harden their infrastructure with PQC technologies in the future.
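One widely cited planning heuristic — not from the article, but commonly attributed to cryptographer Michele Mosca — frames the harvest-now, decrypt-later risk as simple arithmetic: if the years your data must remain confidential plus the years a PQC migration will take exceed the years until a cryptographically relevant quantum computer exists, ciphertext harvested today will still matter when it is broken. A minimal sketch, with illustrative numbers only:

```python
# Rough planning aid (Mosca's inequality), not from the article: if
# shelf_life + migration_years > years_to_quantum, harvested ciphertext
# can be decrypted while it still matters.

def pqc_risk_window(shelf_life_years: float,
                    migration_years: float,
                    years_to_quantum: float) -> float:
    """Return the exposure window in years; a positive value means data is at risk."""
    return (shelf_life_years + migration_years) - years_to_quantum

# Example figures only; the ~7-year Y2Q countdown cited above is one estimate.
exposure = pqc_risk_window(shelf_life_years=10, migration_years=5, years_to_quantum=7)
if exposure > 0:
    print(f"At risk: data harvested today could be exposed ~{exposure:.0f} years too early.")
else:
    print("Within tolerance under these assumptions.")
```

Plugging in the Cloud Security Alliance's roughly seven-year Y2Q countdown shows how quickly long-lived data — health records, design documents, classified material — falls inside the exposure window.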
"
3,196
2,023
"How deepfakes 'hack the humans' (and corporate networks) | VentureBeat"
"https://venturebeat.com/security/how-deepfakes-hack-the-humans-and-corporate-networks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How deepfakes ‘hack the humans’ (and corporate networks) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Once crude and expensive, deepfakes are now a rapidly rising cybersecurity threat. A UK-based firm lost $243,000 thanks to a deepfake that replicated a CEO’s voice so accurately that the person on the other end authorized a fraudulent wire transfer. A similar “deep voice” attack that precisely mimicked a company director’s distinct accent cost another company $35 million. Maybe even more frightening, the CCO of crypto company Binance reported that a “sophisticated hacking team” used video from his past TV appearances to create a believable AI hologram that tricked people into joining meetings. “Other than the 15 pounds that I gained during COVID being noticeably absent, this deepfake was refined enough to fool several highly intelligent crypto community members,” he wrote. Cheaper, sneakier and more dangerous Don’t be fooled into taking deepfakes lightly. Accenture’s Cyber Threat Intelligence ( ACTI ) team notes that while recent deepfakes can be laughably crude, the trend in the technology is toward more sophistication with less cost. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In fact, the ACTI team believes that high-quality deepfakes seeking to mimic specific individuals in organizations are already more common than reported. In one recent example , the use of deepfake technologies from a legitimate company was used to create fraudulent news anchors to spread Chinese disinformation showcasing that the malicious use is here, impacting entities already. A natural evolution The ACTI team believes that deepfake attacks are the logical continuation of social engineering. In fact, they should be considered together, of a piece, because the primary malicious potential of deepfakes is to integrate into other social engineering ploys. This can make it even more difficult for victims to negate an already cumbersome threat landscape. ACTI has tracked significant evolutionary changes in deepfakes in the last two years. For example, between January 1 and December 31, 2021, underground chatter related to sales and purchases of deepfaked goods and services focused extensively on common fraud, cryptocurrency fraud (such as pump and dump schemes) or gaining access to crypto accounts. 
A lively market for deepfake fraud However, the trend from January 1 to November 25, 2022 shows a different, and arguably more dangerous, focus on utilizing deepfakes to gain access to corporate networks. In fact, underground forum discussions on this mode of attack more than doubled (from 5% to 11%), with the intent to use deepfakes to bypass security measures quintupling (from 3% to 15%). This shows that deepfakes are changing from crude crypto schemes to sophisticated ways to gain access to corporate networks — bypassing security measures and accelerating or augmenting existing techniques used by a myriad of threat actors. The ACTI team believes that the changing nature and use of deepfakes are partially driven by improvements in technology, such as AI. The hardware, software and data required to create convincing deepfakes are becoming more widespread, easier to use and cheaper, with some professional services now charging less than $40 a month to license their platform. Emerging deepfake trends The rise of deepfakes is amplified by three adjacent trends. First, the cybercriminal underground has become highly professionalized, with specialists offering high-quality tools, methods, services and exploits. The ACTI team believes this likely means that skilled cybercrime threat actors will seek to capitalize by offering an increased breadth and scope of underground deepfake services. Second, due to double-extortion techniques utilized by many ransomware groups, there is an endless supply of stolen, sensitive data available on underground forums. This enables deepfake criminals to make their work much more accurate, believable and difficult to detect. This sensitive corporate data is increasingly indexed, making it easier to find and use. Third, dark web cybercriminal groups also have larger budgets now. The ACTI team regularly sees cyber threat actors with R&D and outreach budgets ranging from $100,000 to $1 million and as high as $10 million. This allows them to experiment and invest in services and tools that can augment their social engineering capabilities, including active cookie sessions, high-fidelity deepfakes and specialized AI services such as vocal deepfakes. Help is on the way To mitigate the risk of deepfake and other online deceptions, follow the SIFT approach detailed in the FBI's March 2021 alert. SIFT stands for Stop, Investigate the source, Find trusted coverage and Trace the original content. This can include studying the issue to avoid hasty emotional reactions, resisting the urge to repost questionable material and watching for the telltale signs of deepfakes. It can also help to consider the motives and reliability of the people posting the information. If a call or email purportedly from a boss or friend seems strange, do not answer. Call the person directly to verify. As always, check "from" email addresses for spoofing and seek multiple, independent and trustworthy information sources. In addition, online tools can help you determine whether images are being reused for sinister purposes or whether several legitimate images are being used to create fakes. The ACTI team also suggests incorporating deepfake and phishing training — ideally for all employees — developing standard operating procedures for employees to follow if they suspect an internal or external message is a deepfake, and monitoring the internet for potentially harmful deepfakes (via automated searches and alerts). It can also help to plan crisis communications in advance of victimization.
This can include pre-drafting responses for press releases, vendors, authorities and clients and providing links to authentic information. An escalating battle Presently, we’re witnessing a silent battle between automated deepfake detectors and the emerging deepfake technology. The irony is that the technology being used to automate deepfake detection will likely be used to improve the next generation of deepfakes. To stay ahead, organizations should consider avoiding the temptation to relegate security to ‘afterthought’ status. Rushed security measures or a failure to understand how deepfake technology can be abused can lead to breaches and resulting financial loss, damaged reputation and regulatory action. Bottom line, organizations should focus heavily on combatting this new threat and training employees to be vigilant. Thomas Willkan is a cyber threat intelligence analyst at Accenture. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,197
2,023
"Announcements at RSAC 2023 show alliances, AI defining the future of cybersecurity | VentureBeat"
"https://venturebeat.com/security/announcements-at-rsac-2023-show-alliances-ai-defining-the-future-of-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Announcements at RSAC 2023 show alliances, AI defining the future of cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. CISOs want more efficacy, real-time data visibility and a unified view of endpoints, identities and assets across their networks. They’re also looking for pricing help from vendors to stay within budget. Any new announcement at RSAC 2023 needed to be benchmarked against those two goals. RSAC proves selling consolidation is a team sport The conference’s theme, “Stronger Together,” was appropriate given the dozens of new alliances and partnerships being launched. With CISOs pushing their vendors to provide more consolidation of their tech stacks and spending, as well as increased efficacy, leading vendors, including CrowdStrike, Delinea, Google, Mandiant, Accenture and Palo Alto Networks, responded: More alliances and partnerships were mentioned at RSAC 2023 than at any previous edition of the conference. The work of Accenture and Palo Alto Networks reflects the value that alliances will have to deliver to earn long-term engagements. The two companies are collaborating to deliver joint secure access service edge (SASE) solutions powered by Palo Alto Networks’ AI-powered Prisma SASE, enabling organizations to improve their cyber-resilience and accelerate business transformation. “Organizations are seeking to reduce the risk of managing their increasingly complex IT environments — in which new technology is layered on top of the legacy infrastructure — while ensuring business resilience,” said Rex Thexton, who leads Accenture’s cybersecurity protection business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! It was evident which vendors had most quickly identified consolidation as a business opportunity, and which ones are just starting to see the need to create shared systems with solid APIs to address CISOs’ needs. CrowdStrike’s consolidation strategy anchored with XDR , a platform that can deliver greater threat intelligence with AI , was one of the first to take a product-based approach to the opportunity. Palo Alto Networks had taken an all-in approach to consolidation last year at its Ignite ’22 conference. CrowdStrike followed with partnerships, announced at RSAC 2023, with Google Workspace , CrowdStream (powered by Cribl) and the announcement of the industry’s first native XDR offering for ChromeOS. 
Benchmarking alliances by their platform support An excellent way to benchmark the many new partnerships is to see which ones can share telemetry data and provide a unified view of an enterprise’s network and endpoints. That is what CISOs want. Absolute Software’s Application Persistence -as-a-Service Ecosystem (APaaS) reflects how an alliance program supported by a scalable platform can help CISOs gain efficacy, real-time data visibility and a unified view of endpoints, identities and assets across networks. Absolute took an innovative approach to designing its APaaS platform, so its ISV partners could capitalize on its expertise with its Absolute Persistence technology. Absolute’s technology is embedded in over 600 million PCs’ firmware, making it the only self-healing endpoint platform that provides an undeletable digital tether to every device and endpoint to help ensure resiliency. By taking a platform-centric approach to their APaaS initiatives, ISV partners can gain the advantages of application resilience and measure every endpoint’s health and integrity. ISVs integrate the Absolute APaaS SDK into their installer, which allows them to enroll and activate Absolute Persistence and enable their apps for application resilience and self-healing on behalf of their end customers. Absolute’s APaaS won an award from Cyber Defense Magazine (CDM) at RSAC this year in the Next Gen Cyber Resilience Solution category. AI is the new DNA of cybersecurity Cyberattackers routinely use ChatGPT to personalize phishing messages, create ransomware code, fine-tune malware -less attack strategies and automate how they search for open ports in target organizations. Moving faster than the most efficient cybersecurity and security operations center (SOC) teams and technologies, cyberattackers reinvent attack strategies in minutes, relocating attacks from one continent to another to avoid detection. Every breach attempt is designed to capitalize on human weaknesses, whether through social engineering or overwhelming complexity, speed and scale. Taking on the challenge of containing a breach requires machine learning and AI. Of the many excellent keynotes given at RSAC, Vasu Jakkal, Microsoft CVP, security, compliance, identity and privacy, and Jeetu Patel, EVP and GM of security and collaboration business units at Cisco, gave two of the most memorable. Both speakers articulated a vision of AI that makes it clear it’s the new DNA of cybersecurity. Each mentioned how critical it is to attain machine scale and speed to counter attacks. “We have to remember who we are up against as we think about why we need AI,” Vasu explained during her insightful and interesting keynote, titled Defending at Machine Speed: Technology’s New Frontier. “Today the threat landscape is challenging. We’ve gone from 567 attacks per second to 1,287 attacks per second. That translates to tens of billions of attacks. Cybersecurity is very complex. The average defender is dealing with more than 70 tools at any given time, and it takes a long time for us to investigate all of this work and to be strategic so that the AI will be a game changer.” “The ability to discern between a real threat and legitimate activity is going to get harder and harder and harder to do,” Cisco’s Patel told VentureBeat at RSAC this week. “And so, given that you don’t know what’s a legitimate activity, you don’t know what regular activity you might be conducting. 
What you end up having is this dilemma: If you cannot deal with these attacks and the increased sophistication of attacks at human scale anymore, you have to deal with a machine scale. "To deal with it on a machine scale," he continued, "you need to have data and telemetry that can't be isolated — there has to be correlation across domains. So this notion of [a] cross-domain native boundary is really important. Because that feeds an AI model that can help you better detect anomalies; that can then make sure that you do the right things to not only detect the breaches faster but also respond to them as fast as possible." Patel's keynote presentation, Threat Response Needs New Thinking. Don't Ignore This Key Resource, is worth watching. Integrated AI is table stakes The events at RSAC also showed which cybersecurity vendors are taking a systematic, platform-based approach to augmenting existing AI systems with more adaptive models. CISOs want real-time data visibility and a unified view of endpoints, identities and assets across their networks, supported with AI-based insights. VentureBeat spoke with several CEOs at RSAC to learn how each perceives the value of AI in their product strategies today and in the future. Connie Stack, CEO of NextDLP, told VentureBeat, "AI and machine learning can significantly enhance data loss prevention by adding intelligence and automation to detecting and preventing data loss. AI and machine learning algorithms can analyze patterns in data and detect anomalies that may indicate a security breach or unauthorized access to sensitive information well before any policy violation occurs." Stack also mentioned that NextDLP is looking at how "AI and machine learning can also be used to predict potential security threats based on patterns and historical data. This can help security teams take proactive measures to prevent data loss or leakage. Our customers and prospects are excited about the potential of AI and ML applied to their DLP use cases. They see great potential in reducing manual efforts around detecting data loss so they can reallocate precious security resources to other tasks." Most CEOs and CISOs have insider threats higher on their priority list than they did last year. The reason: While many companies have not announced layoffs, employees are made anxious by frequent news reports of tech leaders letting thousands of workers go. VentureBeat asked Stack how AI can be used to reduce or even eliminate insider threats on the NextDLP platform. She told VentureBeat, "AI and machine learning integrated into the Reveal Platform from Next and our endpoint agent reduce or even eliminate insider threat via real-time user monitoring. The AI and ML algorithms monitor user behavior and enable organizations to detect and respond to potential data-loss incidents immediately. The behavioral analytics rapidly detect abnormal patterns, such as accessing sensitive data outside of normal working hours or downloading large amounts of data to an external device, and flag them for analyst follow-up without even having triggered a policy violation."
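To make that kind of behavioral analytics concrete, here is a minimal, generic sketch — not NextDLP's Reveal platform or any vendor's production model — that trains an isolation forest on a user's baseline activity and flags events far outside it, such as an off-hours bulk transfer:

```python
# Minimal sketch of behavioral anomaly detection; feature values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row of the baseline: [hour_of_access (0-23), megabytes_transferred]
baseline = np.array([[9, 12], [10, 8], [11, 20], [14, 15], [16, 10],
                     [17, 25], [9, 18], [13, 9], [15, 14], [10, 22]])

# Fit the model on the user's normal working pattern.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_events = np.array([[10, 16],    # ordinary working-hours activity
                       [2, 4000]])  # 2 a.m. bulk transfer -> likely anomaly
for event, label in zip(new_events, model.predict(new_events)):
    status = "flag for analyst follow-up" if label == -1 else "normal"
    print(event, status)
```

In practice the features would come from endpoint and identity telemetry rather than a hand-built array, but the shape of the problem — learn a baseline, score deviations, route the outliers to an analyst — is the same.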
"
3,198
2,023
"3CX data breach shows organizations can’t afford to overlook software supply chain attacks   | VentureBeat"
"https://venturebeat.com/security/3cx-data-breach-shows-organizations-cant-afford-to-overlook-software-supply-chain-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 3CX data breach shows organizations can’t afford to overlook software supply chain attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Last month, VoIP provider 3CX experienced a data breach after an employee downloaded a trojanized version of Trading Technologies ’ X_Trader software. After breaking into the vendor’s environment, North Korean threat actors then used an exploit to ship malicious versions of the 3CX desktop app to downstream customers as part of a software supply chain attack. The incident resulted in the compromise of two critical infrastructure organizations and two financial trading entities. It’s one of the first known instances where a threat actor chained together two supply chain attacks in one. More importantly, this high-profile breach highlights the havoc that third-party compromise can wreak on an organization, and shows that organizations need to focus on mitigating upstream risk if they want to avoid similar incidents in future. After all, when considering that supply chain attacks increased by 633% over the past year, with 88,000 known instances, security leaders can’t afford to assume that these attacks are rare or infrequent. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Understanding the risk of software supply chain attacks Ever since a Russian cybergang orchestrated the supply chain breach in December 2020 to gain access to SolarWinds internal systems and shipped malicious updates to customers, as well as opening up access to as many 18,000 SolarWinds customers, these styles of attacks have remained a persistent threat to organizations. One of the main reasons for this is that they’re cost effective. For financially and espionage-motivated cybercriminals, supply chain attacks are a go-to choice because an organization can hack a single software vendor and gain access to multiple downstream organizations to maximize their reach. At the same time, the ability of an intruder to situate themselves in-between the vendor and customer’s relationship, puts them in a position to move laterally between multiple organizations at a time to gain access to as much data as possible. “Supply chain attacks are very difficult to pull off, but highly cost effective if they succeed, since they open a very wide attack surface, usually known and available exclusively to the attacker. 
This creates a ‘hunting ground,’ or even a sort of ‘buffet’ in which the threat actor has their choice of target organizations and can operate with fewer constraints,” said Amitai Cohen, attack vector–intel lead at Wiz , in an email to VentureBeat. “For end-user software, the threat actor can gain initial access to every workstation or server in the target organization’s network on which the app is installed,” Cohen said. What makes the 3CX breach stand out According to Mandiant consulting, the team that discovered the initial compromise vector of the breach, the incident was notable not just because of the linked software supply chain attacks, but because it highlighted that the North Korean threat actor, referred to as UNC4736, has developed the ability to launch these attacks. “These types of breaches have been happening for a long time. This one was notable because it was the first time we had seen these things kind of daisy-chained together, where one sort of led to another,” said Ben Read, Mandiant director of cyber-espionage analysis, in an interview with VentureBeat. He added the attack also alerted security experts that “North Korea has the technical ability to carry these things off.” Another concerning element of this incident is the fact that the breach remained undiscovered for a significant period of time, leading to concerns that there could be other unknown organizations affected. “And the other part is that the Trading Technologies [breach] occurred back in the spring of 2022 and as far as we’re aware, the specifics of it hadn’t come to light before now. So there’s a possibility that this has happened in other places and no one has found it yet,” Read said. More to come from UNC4736 At this stage, it’s too early to say whether the success of this breach will inspire other threat actors to launch similar attacks. However, Symantec principal intelligence analyst Dick O’Brien, who has been closely monitoring the incident, believes that the UNC4736 group behind the attack are likely to conduct similar attacks in future. “We’re seeing a North Korean sponsored actor getting its foothold into multiple organizations in multiple geographies. And while the motivation right now seems to be probably financial; with North Korea, you can never really rule out anything else occurring,” O’Brien said. “I wouldn’t be surprised at all if we see another supply chain attack from this group,” O’Brien said. “I think that the reach this group has gotten through the supply chain attacks is a cause for concern.” As a result, organizations need to be hardening their internal network controls to prevent such actors from moving laterally from system to system, as part of what Read calls an “assume compromise” approach. In practice, this means incorporating network segmentation , which is dividing a network into smaller parts and implementing zero trust access controls to limit privileged access to resources. That way, if an attacker does gain access to the environment, their mobility is limited, making the incident easier to contain. How organizations can mitigate third-party risk While internal controls like network segmentation and zero-trust access controls go some way to mitigating the risk of lateral movement once an attacker has entered an organization’s environment, they do little to address the risks of an upstream software vendor being breached in the first place. 
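One modest customer-side control — illustrative only, and admittedly of limited use when the vendor's own build pipeline is what was compromised, as in the X_Trader case — is to verify every downloaded installer against a published SHA-256 checksum or code-signing signature before it is allowed to run or be deployed. A minimal sketch; the path and expected hash below are placeholders:

```python
# Verify an installer's SHA-256 digest against a published value before use.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: in practice the expected hash comes from the vendor's
# signed release notes or an internal allowlist, not a hard-coded string.
expected = "replace-with-published-sha256"
installer = "downloads/desktop-app-setup.exe"

if sha256_of(installer) != expected:
    raise SystemExit("Checksum mismatch: do not install; report to security.")
print("Checksum verified.")
```

Controls like this complement, rather than replace, the vendor due diligence discussed next.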
Given that organizations can’t control the internal security practices and processes of third-party vendors, Cohen argues customers need to “choose vendors with a proven security track record.” Gartner suggests that organizations can test the security standing of a vendor by conducting due diligence in the form of risk assessments, not just prior to signing a contract with a third party, but throughout the entire commercial relationship. As part of the risk assessment, an organization should request internal audits and risk reports, issue questionnaires, and analyze broader industry data (e.g., does the organization belong to an industry at higher risk of cyberattacks) to quantify the level of risk presented by a commercial partnership. It’s also useful to review what regulations the organization is compliant with and verifying proof of any certifications issued by third-party standard assessment organizations, such as the ISO , to better understand the level of controls implemented within the environment. While due diligence alone won’t mitigate third-party risk completely, it can help enterprises screen out vendors with less-defined or effective security procedures. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,199
2,023
"New Starburst integration unlocks cross-platform data transformations for dbt users | VentureBeat"
"https://venturebeat.com/data-infrastructure/new-starburst-integration-unlocks-cross-platform-data-transformations-for-dbt-users"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New Starburst integration unlocks cross-platform data transformations for dbt users Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Boston-based data lake analytics company Starburst today announced an integration with transformation tool dbt Cloud to help users of the platform build data pipelines spanning multiple data sources via one central plane. The integration, which is now live as a dedicated adapter inside dbt Cloud, connects to Starburst’s SaaS offering Starburst Galaxy. It comes as a much-needed solution to federate data assets for enterprises that continue to juggle highly distributed data environments. Starburst says the connection is easy to deploy and can be up and running in a matter of minutes. How does Starburst Galaxy help with data transformations? Starburst Galaxy is the cloud-native and fully managed service of Starburst’s massively parallel processing (MPP) query engine. It allows enterprise users to query a variety of data sources, or join data across multiple data sources through a single query. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With the latest integration, dbt users can use this particular capability of the SaaS platform to transform all of their data assets, regardless of where they reside. This essentially means no more need to prepare and move data via manually configured ETL pipelines — which can be cumbersome, expensive and prone to risks. “By integrating its federated query engine with dbt’s transformation engine, Starburst aims to help data teams increase the amount of data they prepare for analytics projects. dbt users can query data in distributed locations, then clean, model, test, deliver and document those datasets for consumption. There are more than 50,000 users of the open-source dbt tool, so it’s a significant addressable market,” Kevin Petrie, VP of research at Eckerson Group, tells VentureBeat. To use the integration, all one has to do is create a new dbt Cloud project, select Starburst as the data platform, enter credentials and connect. As soon as the authentication is done, one can start using Starburst’s query engine to transform distributed data. “Users have to write queries as normal, using SQL JOINs between data from multiple sources while Starburst intelligently determines where to send requests,” Matt Fuller, co-founder and VP of product at Starburst, told VentureBeat. 
Fuller emphasized that part of the power of this integration is how easy it is to implement and use. Goal to maximize data coverage While global enterprises continue to shift toward centralized data warehouses, a large number of companies still have data assets spread across multiple distributed platforms, including on-prem databases and object storage. The new dbt-Starburst integration ensures that these data assets are also prepared for and used in analytics and machine learning projects. "This integration addresses the needs of the enterprise customer base, helping them get the most out of their existing systems and extending dbt's analytics engineering workflow platform to new cloud-first use cases without additional operational overhead," Harrison Johnson, head of technology partnerships at Starburst, said. The trend toward highly distributed data environments is expected to continue in the near future, making solutions like this valuable for data engineers and analytics engineers, Petrie said. "
3,200
2,023
"How blockchain technology is paving the way for a new era of cloud computing | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-blockchain-technology-is-paving-the-way-for-a-new-era-of-cloud-computing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How blockchain technology is paving the way for a new era of cloud computing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The cloud infrastructure space is on the brink of a revolutionary change with the advent of blockchain technology. Blockchain’s decentralized nature and exceptional fault tolerance make it an ideal solution for record management tasks like financial transactions, identity management, provenance and authentication. Blockchain technology offers enhanced network security, data privacy and decentralization; the cloud provides high scalability and elasticity. The convergence of cloud and blockchain has the potential to create innovative solutions that will revolutionize the tech industry. Blockchain’s emergence in the cloud infrastructure space is poised to shatter the conventional model of centralized cloud providers. The decentralized model promises to transform applications and data hosting for developers and businesses alike. In addition, this development has the potential to significantly impact data latency on the internet, offering improved performance and reliability. As changing business demands drive the tech evolution, blockchain’s adoption in cloud/edge computing will play a crucial role in shaping the future of the digital landscape. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need for a decentralized cloud model Cloud computing has become an integral part of modern life, thanks to the internet, the digital revolution and advanced technologies that make our lives easier and more convenient. While cloud computing has revolutionized data management and storage, it has several areas for improvement that stem from its centralized nature. “Having a main central server makes data easily hackable. And due to monopolies of a few providers, user data is easily available and can be manipulated for business purposes. Additionally, centralized data management is reaching its saturation point, which leads to expensive data storage options for users down the road,” Anantha Krishnan, CEO of Sarvalabs and founder of MOI, a context-aware P2P protocol, told VentureBeat. These issues highlight the need for a decentralized approach to cloud computing, one where the power is in the hands of the users and data is not vulnerable to manipulation or hacking. 
The emergence of blockchain technology has paved the way for a new era of decentralized cloud computing and storage, presenting a viable alternative to traditional, centralized architectures. Decentralized cloud computing promises to address these shortcomings and usher in a new era of data management and storage that is secure, transparent and accessible to all. Unlike centralized storage systems that rely on multiple servers hosted in a centralized database, decentralized storage involves distributing data across multiple computers (i.e., nodes) connected on a peer-to-peer network. Immutability and data provenance with blockchain “Blockchain itself can be used within a private ‘walled garden’ as well,” Ian Foley, chief business officer at data storage blockchain firm Arweave , told VentureBeat. “It is a technology structure that brings immutability and maintains data provenance. Centralized cloud vendors are also developing blockchain solutions, but they lack the benefits of decentralization. Decentralized cloud infrastructures are always independent of centralized environments, enabling enterprises and individuals to access everything they’ve stored without going through a specific application.” Decentralized storage platforms use the power of blockchain technology to offer transparency and verifiable proof for data storage, consumption and reliability through cryptography. This eliminates the need for a centralized provider and gives users greater control over their data. With decentralized storage, data is stored in a wide peer-to-peer (P2P) network, offering transfer speeds that are generally faster than traditional centralized storage systems. In addition, the entire process is handled by nearby peers, rather than servers hosted in a physical location, enabling users to transfer and access their data with greater ease and efficiency. “Instead of relying on today’s content delivery networks (CDNs), content addressability can be used to get data automatically from the closest peer in a large P2P network,” said Krishnan. “The Web3 sector is moving toward fast performance and secure interactions, which can enhance traditional sectors immensely.” Tackling AI security challenges through decentralization The decentralized nature of blockchain, which relies on a user’s private key instead of a cloud administrator’s, provides an advantage in terms of ownership and control of data. With privacy concerns at an all-time high, many tech enthusiasts are calling for a transition from the current Web2 to the more decentralized and secure Web3. Through blockchain, Web3 offers consumers greater autonomy and data sovereignty, which is becoming increasingly critical as we enter the era of mass-scale artificial intelligence (AI). “It is easy to crawl social media and data online for use in training AI algorithms. With Web3, we envision the licensing and permitted uses being embedded directly into the data as it is stored. This will enable users to prevent, for example, their photos from being used to morph into an AI-generated image,” said Foley. ”This feature is a significant step forward in safeguarding privacy, and it’s one of the reasons why blockchain technology for cloud is gaining traction.” Blockchain offers many other security benefits. Its ledger system prevents anyone from deleting or tampering with data by creating an unalterable record of the original source. 
This feature is crucial for maintaining data integrity, and it’s a major reason why blockchain-based cloud systems are being adopted in various industries, from finance to healthcare. A future of opportunities for decentralized cloud computing Foley believes the advent of decentralized blockchain/cloud technology will initiate a transformative phase of consent-driven data exchange, further escalating the innovation rate. “With permissions and usage restrictions on data and content, as well as the breadth of data needed to avoid bias in future algorithms, blockchain-based decentralized cloud infrastructure is poised to explode with growth,” he said. “I absolutely believe in this emerging technology’s vision, and it comes from all the benefits of decentralization and of blockchain data’s immutability and provenance.” For his part, Krishnan said blockchain-based cloud infrastructure is here to stay. “Progress will be around creating million-node decentralized P2P cloud structures that are accessed in a simple manner. The real opportunity is the ability to enable the emerging digital society to be democratic, sustainable and equitable. Empowering individuals and allowing them to control their digital experiences the way they want (instead of controlling as it is today) is exciting,” said Krishnan. “Blockchain-based cloud services enable this by creating user-controlled infrastructure and data management layers in a shared infrastructure.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,201
2,023
"Translated Unveils Adaptive Machine Translation Service in 200 Languages | VentureBeat"
"https://venturebeat.com/business/translated-unveils-adaptive-machine-translation-service-in-200-languages"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Translated Unveils Adaptive Machine Translation Service in 200 Languages Share on Facebook Share on X Share on LinkedIn The new technology provides commercial support for the broadest range of languages ever supported, potentially reaching 6.5 billion native speakers. ROME–(BUSINESS WIRE)–April 27, 2023– Translated, a global leader in language translation, has announced a significant expansion of its machine translation services. Previously supporting 56 languages, the company’s adaptive machine translation technology, ModernMT, now supports 200 languages, setting a new benchmark in the industry. No other commercial service currently supports such an extensive range of languages. Translated’s ModernMT was recently recognized as a leader in machine translation by both IDC and CSA Research , further demonstrating its high quality and reliability. By expanding its language coverage to potentially reach 6.5 billion native speakers, Translated empowers enterprises to forge stronger connections with global users and customers, enabling seamless communication and understanding. Effective today, enterprises can now access the new offering via an API, and professional translators can benefit from the expanded language support using a plugin for their Computer-Assisted Translation (CAT) tools, such as Matecat. The adaptive models are designed to continuously improve translation quality by incorporating corrections from professional translators in real time. For the first time ever, 30 new languages are supported in the market, leapfrogging directly to the most capable adaptive technology. Among the new languages now supported by ModernMT are Bengali, Punjabi, and Javanese, helping us reach over 2 billion more native speakers worldwide. This significant expansion has been made possible thanks to the exponential growth of research in AI and the efforts of non-profit organizations like Common Crawl&& and Opus, in addition to the transparency of Meta’s language research. Marco Trombetti, CEO of Translated, commented on the company’s milestone: “ Today, we are not merely providing a tool that supports translation in more languages. Our adaptive models empower professional translators to handle more content, while their corrections contribute to the continuous enhancement of machine translation quality across these languages. We believe this collaborative approach will aid in increasing translation quality and help preserve many endangered languages, showcasing a powerful and sustainable synergy between humans and machines. 
“ For more information on Translated’s groundbreaking adaptive machine translation service, visit www.modernmt.com View source version on businesswire.com: https://www.businesswire.com/news/home/20230427005517/en/ Press Contact Silvio Gulizia Head of Content Mail: [email protected] Mob: +39 393.1044785 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,202
2,023
"ODDITY Invests $100M to Bring Pharma's AI Based Molecule Discovery Technology to Beauty and Wellness. | VentureBeat"
"https://venturebeat.com/business/oddity-invests-100m-to-bring-pharmas-ai-based-molecule-discovery-technology-to-beauty-and-wellness"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release ODDITY Invests $100M to Bring Pharma’s AI Based Molecule Discovery Technology to Beauty and Wellness. Share on Facebook Share on X Share on LinkedIn With the acquisition of Boston-based Revela, a leading biotech startup, and the establishment of ODDITY LABS, the company is at the vanguard, bringing a proven drug discovery technology into beauty and wellness. NEW YORK–(BUSINESS WIRE)–April 27, 2023– ODDITY, the consumer technology platform built to transform the global beauty and wellness market, today announces the $76M acquisition of Boston-based Revela, an industry leading biotechnology startup and forerunner in Artificial Intelligence-based molecule discovery for beauty and wellness indications. With the acquisition, the company will establish ODDITY LABS in Boston with an additional $25M investment for its frontier lab. The business combination will boost the development and expansion of proprietary, science-backed, clinically tested, and highly efficacious products, for the benefit of consumers around the world. This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230427005130/en/ Dr. David Zhang, Shiran Holtzman Erel, Oran Holtzman and Dr. Evan Zhao (Photo: Business Wire) “ODDITY is deploying its strong balance sheet to continue reinventing the beauty and wellness market and cement our competitive advantage and technology moat.” said Oran Holtzman, ODDITY’s co-founder and CEO. “With 40M+ users, 1B+ data points and dozens of machine learning algorithms, we believe ODDITY is years ahead of our industry when it comes to technology. With the acquisition of Revela, we are doubling down on innovation but this time around science-backed product development and bringing proven pharma-grade technology to beauty and wellness. Not just for one ingredient or to solve one pain point – but with a platform that spans categories, use cases, and form factors. We believe the combination of ODDITY’s dominant direct to consumer platform and scaling machine with Revela’s biotechnology will be a game changer for the industry. Together we will redefine product efficacy via AI-based molecule discovery engines for the benefit of consumers worldwide. Revela’s multi-category pipeline will be launched via ODDITY’s current and future brands to support future growth.” Founded in 2021, Revela is revolutionizing beauty and wellness as one of the industry’s most advanced biotechnology platforms, and a leader in AI-based molecule discovery. “Advances in computation and molecular biology have ushered in a new era of discovery and development. 
We can now far better understand the biological pathways that drive the behavior of cells, and we can also identify novel molecules that influence that behavior to deliver desired outcomes, in ways that were never possible before,” says Dr. Evan Zhao, Revela co-founder and CEO, “We believe this technology, which is already widely used in pharma for drug discovery, allows us to create targeted and highly efficacious products across a unlimited range of consumer needs, and do it with a speed and efficiency that the industry has never seen before”. Revela already has patent-pending molecule ingredients proven to have significant step-wise improvements in efficacy for skin and hair based on clinical testing. Revela has hundreds of molecules in its development pipeline today, spanning a wide range of beauty and wellness applications. Its existing suite of new molecules will be integrated into ODDITY’s current and future brands, and further developed to find next generation solutions. “We founded Revela to change the beauty and wellness industry, which has stagnated from underinvestment in science and technology,” says Dr. Zhao, founder and CEO of Revela, “While biotechnology has broken new ground in molecule discovery, beauty and wellness has fallen behind for decades, and is serving consumers repackaged versions of sub-optimal ingredients that have been used for decades. Together with ODDITY and its technology platform and online capabilities, we believe we will change this paradigm and redefine the industry.” The transaction was signed and is subject to the satisfaction of customary closing conditions, including satisfaction of applicable regulatory waiting periods. The transaction is expected to close in the second quarter of 2023. ODDITY LABS With the acquisition of Revela, ODDITY will build ODDITY LABS in Boston, MA with a $25M investment for its frontier lab. ODDITY LABS will be a biotechnology research and development center to power ODDITY’s product innovation for the future, through the discovery and development of molecules, probiotics, peptides, and other biological modalities. It is being built to revolutionize the market through patented, proprietary technologies and capabilities, including AI-based molecule discovery, and what we believe is the world’s most advanced phenotypic database to understand the biological mechanisms that drive cellular behavior for beauty and wellness indications. ODDITY LABS will be led by the Revela founding team. Dr. Zhao joins ODDITY as Chief Scientific Officer. Dr. David Zhang, Revela’s CSO, joins as Head of Bioengineering and Mr. Avi Boppana, Revela’s CTO, joins as Head of Platform. “We have seen the transformational powers of biotechnology and artificial intelligence on drug discovery, and are unleashing these technologies in the beauty and wellness space with massive investment,” said Holtzman. “While our competitors are stuck in the past, adding Revela will allow us to build the future of R&D for the category, just as we did with our technology center in Tel Aviv.” AI-Based Molecule Discovery AI-based molecule discovery is a transformative frontier in product development, enabled by the advancements of technologies including synthetic biology, genomic sequencing, robotics, and artificial intelligence. The technological approach is already proven and widely used in the field of biotechnology for drug discovery. 
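In broad strokes — and as a toy illustration rather than anything Revela has disclosed — ensemble-based virtual screening works by having several trained models score a large library of candidate molecules and passing only the top consensus scorers on to lab validation:

```python
# Toy sketch of ensemble virtual screening; the data here is random stand-ins
# for molecular descriptors and assay readouts, not a real pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.random((200, 16))          # stand-in molecular descriptors
y_train = rng.random(200)                # stand-in assay readouts
candidates = rng.random((10_000, 16))    # virtual library to screen

# Fit two different model families and average their predictions.
ensemble = [RandomForestRegressor(random_state=0).fit(X_train, y_train),
            GradientBoostingRegressor(random_state=0).fit(X_train, y_train)]
scores = np.mean([m.predict(candidates) for m in ensemble], axis=0)

top_hits = np.argsort(scores)[::-1][:20]  # best-scoring candidates go to the lab
print(top_hits)
```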
Revela is the forerunner in implementing and scaling AI-based molecule discovery for beauty and wellness, which allows Revela to identify small molecules that are both highly efficacious and safe, and to do it cost efficiently, with accelerated lead times. This multi-step process leverages biological and computational technologies to drive discovery and optimization: High throughput screening of tens of thousands of molecules is conducted using biological assays Results are fed through an ensemble of deep learning models to scale screening across millions of potential molecules Lead molecules are identified with in vitro validation Comprehensive safety testing is conducted both in silico and ex-vivo prior to product development Human clinical testing is conducted to validate results After a winner is identified, molecules are continuously optimized to find the best-in-class, using medicinal chemistry and biological mechanism-based optimization through RNA sequencing, molecular docking, and molecule representation algorithms. Revela’s Ingredients that are already In-Market ProCelinyl ProCelinyl is a novel small molecule that supports the dermal papilla cells and brings hair follicles to a more vibrant state. Procelinyl’s powerful efficacy has been validated in clinical studies. In consumer studies, formulations with this ingredient have consistently outperformed market leading competitors in terms of user satisfaction and efficacy – not just in hair, but also in brow and lash. We believe the ingredient has the potential to redefine the entire hair growth category, which has not seen meaningful innovation in 40 years. Fibroquin Fibroquin is a novel small molecule that supports the pro-collagen pathway in skin to promote skin elasticity and plumpness. In a human clinical study, subjects using the Fibroquin essence had a 2X improvement in skin elasticity compared to subjects using a gold-standard 0.5% retinol serum, in just 8 weeks. Fibroquin formulas resulted in superior anti-aging effects to a best-in-class retinol, without the irritation effects of retinol. About Revela Revela is a biotechnology platform that discovers new molecules for consumer wellness applications using AI for molecular discovery and advanced biological models. Revela’s world-class team has expertise out of Harvard and MIT in synthetic biology, high-throughput screening, assay and model development, and machine learning. Revela is backed by Khosla Ventures, Maki.VC, Montage Ventures, and other top venture funds. Revela was founded by Dr. Evan Zhao, Dr. David Zhang, Avi Boppana, and Ying Tung (Evelyn) Chen. Dr. Evan Zhao, CEO and co-founder of Revela, is a synthetic biologist with a long history of utilizing the newest biotechnologies to disrupt stagnant spaces. He received his B.S. in Chemical Engineering from Caltech and Ph.D. in Chemical Engineering from Princeton where he pioneered the use of optogenetics in metabolic engineering and protein therapeutics, and was a Maeder Graduate Fellow in Energy and the Environment. Evan is lead author of publications in Nature , Nature Biotechnology , Nature Chemical Biology , Nature Communications, etc. He was most recently a Schmidt Science Fellow (~20 postdoctoral scientists selected by the Rhodes Trust annually) at the Wyss Institute for Biologically Inspired Engineering at Harvard University. There, Evan developed a revolutionary approach to selectively activate therapeutics based on the RNA of the cell of interest. Dr. 
David Zhang, CSO and co-founder of Revela, is an expert in immunology and bioengineering with a record of developing translational technologies. He has published in top journals including Nature Biotechnology, Nature Materials, and Nature Communications. He has a B.S. from McGill University, a M.A.Sc from the University of Toronto, and a PhD from Harvard University, where he developed next-generation cancer therapy technologies with one of Revela’s scientific advisors – Professor David Mooney, the Robert P. Pinkas Family Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences at Harvard University and a leader in the fields of immuno-engineering and mechanotransduction. These technologies are currently licensed to a clinical-stage biopharma company. Avinash Boppana, CTO and co-founder of Revela, is a computer scientist with expertise in computational methods for molecule discovery and systems biology. He has a B.S.E from Princeton University where he created novel algorithms for genomics-informed therapeutic development, and has published in top journals including Nature Cell Biology, etc. He has previously worked with the Harvard Medical School, the NIH, and with Professor Connor Coley, the Henri Slezynger (1957) Career Development Assistant Professor in Chemical Engineering & Electrical Engineering and Computer Science at the Massachusetts Institute of Technology – Revela’s principal scientific advisor and a Forbes’ 30 under 30 scientist leading the computational chemical engineering space. About ODDITY ODDITY is a consumer tech company which builds and scales digital-first brands to disrupt the offline-dominated beauty and wellness industries. The company owns IL MAKIAGE and SpoiledChild. ODDITY, with a US HQ in New York City and an R&D center in Tel Aviv, Israel, has built one of the industry’s most advanced technology platforms, which leverages data science, machine learning and computer vision capabilities to deliver a better online experience for consumers. The company uses cutting-edge data science to identify consumer needs and develops solutions in the form of beauty, wellness, and tech products. As part of its technology platform, which currently serves its own brands and over 40 million users, ODDITY has developed several key consumer-facing technology products, including: POWERMATCH – an AI- and Machine Learning-driven matching engine to deliver consumers the perfectly-matched products for them Kenzza – a patented creator-powered in-house media platform that represents one of the largest libraries of bespoke beauty media content in the world Hyperspectral Vision – A patented hyperspectral image recovery software, catapulting the company into the forefront of innovation in computer vision SpoiledBrain – an AI- and Machine Learning-driven matching engine to pair consumers with wellness products The company is developing tools to offer its technology products to external companies. View source version on businesswire.com: https://www.businesswire.com/news/home/20230427005130/en/ Michael Braun [email protected] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
3,203
2,023
"Azuba Healthcare Platform Now Available to Payers | VentureBeat"
"https://venturebeat.com/business/azuba-healthcare-platform-now-available-to-payers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Press Release Azuba Healthcare Platform Now Available to Payers Share on Facebook Share on X Share on LinkedIn One click per year enables clinical data sharing between health apps, providers, payers and patients/caregivers CHICAGO–(BUSINESS WIRE)–April 27, 2023– Azuba Corporation, www.azuba.com , announced the launch of their HIPAA-compliant Azuba Healthcare Platform with broad interoperability automation for all the 350,000+ health apps in the healthcare space at the annual HIMSS23 conference. This broad interoperability solution empowers patients and their caregivers to locate their clinical data wherever it may be stored and to automatically transfer this clinical data to secure, aggregated cloud storage. This aggregated data is then available to be shared with each patient’s doctors, payers, and other caregivers for the duration of each patient’s lifetime. This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20230425006225/en/ Bart Carlson, CEO & Founder, Azuba Corporation (Photo: Business Wire) Patients may also see doctors in many different geographies over the course of their lifetime. As a result, any solution that allows a patient to connect to all their doctors’ records and to keep their records complete and up to date after every appointment represents the pinnacle of health data interoperability desired by the Office of the National Coordinator for Health Care IT ( https://www.healthit.gov/topic/about-onc# ). The single-click automated Azuba Healthcare Platform is designed to replace a patient’s need to connect to each physician’s data repository after every visit and then share that new data with all of their other physicians in order to keep their records up to date everywhere. The Azuba Healthcare Platform is now available to US-based payers (health insurers) for delivery to their tens of millions of members nationwide. The purpose of the Azuba Healthcare Platform is to provide health apps, doctors, payers and patients/caregivers with easy and accurate access to all clinical data for each patient. Deployment of the Azuba Healthcare Platform enables doctors to deliver better care while reducing the risk of dangers to lives and to unnecessary financial expenditures posed by missing and misunderstood clinical data. The Azuba Healthcare Platform also reduces the probability of poor clinical diagnostic decisions, gaps in care, and excessive or insufficient testing with less-than-optimum medical outcomes. 
The Azuba Healthcare Platform is not an app itself, nor does it replace existing electronic medical records systems, but rather provides a centralized means, including resources such as data structures, data transformation processes and API’s, to store clinical data in the cloud, and to communicate, validate, standardize, simplify, translate, and compose for ease of understanding all the patient’s clinical data. Deployment of the Azuba Healthcare Platform enables patient clinical data to be readily accessed and used by all stakeholders, including the existing health apps in the market today. With full implementation across the healthcare system, the Azuba Healthcare Platform has the potential to save hundreds of thousands of lives and up to $500 billion annually due to medical treatment errors and omissions in the U.S. alone. Azuba Corporation, newly formed in Delaware, is the successor organization to Napersoft, Inc. which has been supplying health insurers with enterprise software for the past 36 years. Azuba recently merged with Napersoft, Inc. including its management, development and support teams, its intellectual property, and its longtime client relationships. These assets and experience provide the foundation for the new platform announced at HIMSS23. Azuba CEO Bart Carlson stated, “This announcement has roots in my appointment to the Blue Button Committee in 2011 under joint initiative with the Department of Health and Human Services, Center for Medicare & Medicaid Services (CMS) and the Department of Veterans Affairs (VA) to give patients full access to their health data. Because of the difficulty of the task, the rapid evolution of the healthcare IT environment, the appearance of hundreds of thousands of health-related apps, and the tendency of stakeholders to protect siloed data, the full promise of the Blue Button Committee has until now remained unfulfilled.” “Because of our predecessor Napersoft’s experience in enterprise-level patient clinical data and multi-language communication capabilities, we are perfectly positioned to undertake this challenge. It’s exciting to make this announcement after almost five years of analysis and an additional five years of development integrating the needs of app developers, providers, caregivers, insurers, and their millions of members, while respecting regulatory requirements, generally accepted data interoperability standards, and existing clinical data systems.” “Together with the rest of my exemplary team, with thousands of app developers, a like number of provider organizations, and hundreds of health insurers and other payers, we can now begin the process of saving many lives as well as our society’s limited financial resources,” continued Mr. Carlson. Azuba Corporation, based in Naperville, IL, is a world leader in health IT interoperability. Its predecessor organization, Napersoft, Inc., also founded by Bart Carlson, provides customer communications management software that enables health insurers to communicate in over 100 languages to members/patients, caregivers, and doctors/hospitals. Azuba software solutions are currently used to create and manage approximately fifty percent (50%) of all payer communications in the U.S. The newly announced Azuba Healthcare Platform is a cloud-based national enterprise lifetime clinical records platform with current access to more than 90% of all the digital clinical data in the U.S. 
Through this platform, with just one click annually, patients can grant permission to automatically keep all their clinical data up to date in their Azuba Lifetime Clinical Master Record, all their doctors/hospital EHR systems and all their health apps. And, with all the patient’s health providers using up to date clinical records as a basis for diagnosis and treatment decision making, we are optimistically looking forward to healthier patients with fewer medical appointments and procedures and significant overall reductions in claims costs by health payers. View source version on businesswire.com: https://www.businesswire.com/news/home/20230425006225/en/ Robert Steven Kramarz Chief Capital Officer Azuba Corporation [email protected] Mobile: 760.607.7655 VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,204
2,023
"We must perfect predictive models for generative AI to deliver on the AI revolution | VentureBeat"
"https://venturebeat.com/ai/we-must-perfect-predictive-models-for-generative-ai-to-deliver-on-the-ai-revolution"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest We must perfect predictive models for generative AI to deliver on the AI revolution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Throughout 2022, generative AI captured the public’s imagination. With the release of Stable Diffusion, Dall-E2 , and ChatGPT-3 , people could engage with AI first-hand, watching with awe as seemingly intelligent systems created art, composed songs, penned poetry and wrote passable college essays. Only a few months later, some investors have begun narrowing their focus. They’re only interested in companies building generative AI, relegating those working on predictive models to the realm of “old school” AI. However, generative AI alone won’t fulfill the promise of the AI revolution. The sci-fi future that many people anticipate accompanying the widespread adoption of AI depends on the success of predictive models. Self-driving cars, robotic attendants, personalized healthcare and many other innovations hinge on perfecting “old school” AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Generative AI’s great leap forward? Predictive and generative AI are designed to perform different tasks. Predictive models infer information about different data points so that they can make decisions. Is this an image of a dog or a cat? Is this tumor benign or malignant? A human supervises the model’s training, telling it whether its outputs are correct. Based on the training data it encounters, the model learns to respond to different scenarios in different ways. Generative models produce new data points based on what they learn from their training data. These models typically train in an unsupervised manner, analyzing the data without human input and drawing their own conclusions. For years, generative models had the more difficult tasks, such as trying to learn to generate photorealistic images or create textual information that answers questions accurately, and progress moved slowly. Then, an increase in the availability of compute power enabled machine learning (ML) teams to build foundation models: Massive unsupervised models that train vast amounts of data (sometimes all the data available on the internet). Over the past couple of years, ML engineers have calibrated these generative foundation models — feeding them subsets of annotated data to target outputs for specific objectives — so that they can be used for practical applications. 
Fine-tuning AI ChatGPT is a good example. It’s built on GPT-3, a foundation model trained on vast amounts of unlabeled data. To create ChatGPT, OpenAI hired 6,000 annotators to label an appropriate subset of data, and its ML engineers then used that data to fine-tune the model and teach it to generate specific information. With these sorts of fine-tuning methods, generative models have begun to create outputs of which they were previously incapable, and the result has been a swift proliferation of functional generative models. This sudden expansion makes it appear that generative AI has leapfrogged the performance of existing predictive AI systems. Appearances, however, can be deceiving. The real-world use cases for predictive and generative AI When it comes to current real-world use cases for these models, people use generative and predictive AI in very different ways. Predictive AI has largely been used to free up people’s time by automating human processes to perform at very high levels of accuracy and with minimal human oversight. In contrast, the current iteration of generative AI is mostly being used to augment rather than replace human workloads. Most of the current use cases for generative AI still require human oversight. For instance, these models have been used to draft documents and co-author code, but humans are still “in the loop,” reviewing and editing the outputs. At the moment, generative models haven’t yet been applied to high-stakes use cases, so it doesn’t matter much if they have large error rates. Their current applications, such as creating art or writing essays, don’t carry much risk. If a generative model produces an image of a woman with eyes too blue to be realistic, what harm is really done? Predictive AI has real-world impact Many of the use cases for predictive AI, on the other hand, do carry risks that can have very real impact on people’s lives. As a result, these models must achieve high-performance benchmarks before they’re released into the wild. Whereas a marketer might use a generative model to draft a blog post that’s 80% as good as the one they would have written themselves, no hospital would use a medical diagnostic system that predicts with only 80% accuracy. While on the surface it may appear that generative models have taken a giant leap forward in terms of performance when compared to their predictive counterparts, all things equal, most predictive models are actually required to perform at a higher level of accuracy because their use cases demand it. Even lower-stakes predictive AI models, such as email filtering, need to meet high-performance thresholds. If a spam email lands in a user’s inbox, it’s not the end of the world, but if an important email gets filtered directly to spam, the results could be severe. The capacity at which generative AI can currently perform is far from the threshold required to make the leap into production for high-risk applications. Using a generative text-to-image model with nontrivial error rates to make art may have enthralled the general public, but no medical publishing company would use that same model to generate images of benign and malignant tumors to teach medical students. The stakes are simply too high. The business value of AI While predictive AI may have recently taken a backseat in terms of media coverage, in the near- to medium-term, it’s still these systems that are likely to deliver the greatest value for business and society. 
Although generative AI creates new data, it’s less useful for solving problems on existing data. Most of the urgent large-scale problems that humans need to solve require making inferences about, and decisions based on, real-world data. Predictive AI systems can already read documents, control temperature, analyze weather patterns, evaluate medical images, assess property damage and more. They can generate immense business value by automating vast amounts of data and document processing. Financial institutions, for instance, use predictive AI to review and categorize millions of transactions each day, saving employees from these time- and labor-intensive tasks. However, many of the real-world applications for predictive AI that have the potential to transform our day-to-day lives depend on perfecting existing models so that they achieve the performance benchmarks required to enter production. Closing the prototype-production performance gap is the most challenging part of model development, but it’s essential if AI systems are to reach their potential. The future of generative and predictive AI So has generative AI been overhyped? Not exactly. Having generative models capable of delivering value is an exciting development. For the first time, people can interact with AI systems that don’t just automate but create — an activity of which only humans were previously capable. Nonetheless, the current performance metrics for generative AI aren’t as well defined as those for predictive AI, and measuring the accuracy of a generative model is difficult. If the technology is going to one day be used for practical applications — such as writing a textbook — it will ultimately need to have performance requirements similar to those of predictive models. Likewise, predictive and generative AI will merge eventually. Mimicking human intelligence and performance requires having one system that is both predictive and generative, and that system will need to perform both of these functions at high levels of accuracy. In the meantime, however, if we really want to accelerate the AI revolution, we shouldn’t abandon “old school AI” for its flashier cousin. Instead, we need to focus on perfecting predictive AI systems and putting resources into closing the prototype-production gap for predictive models. If we don’t, ten years from now, we might be able to create a symphony from text-to-sound models, but we’ll still be driving ourselves. Ulrik Stig Hansen is founder and president of Encord. "
3,205
2,023
"Understanding the risks of generative AI for better business outcomes | VentureBeat"
"https://venturebeat.com/ai/understand-risks-generative-ai-better-business-outcomes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Understanding the risks of generative AI for better business outcomes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Any new technology can be an amazing asset to improve or transform business environments if used appropriately. It can also be a material risk to your company if misused. ChatGPT and other generative AI models are no different in this regard. Generative AI models are poised to transform many different business areas and can improve our ability to engage with our customers and our internal processes and drive cost savings. But they can also pose significant privacy and security risks if not used properly. ChatGPT is the best-known of the current generation of generative AIs, but there are several others, like VALL-E, DALL-E 2, Stable Diffusion and Codex. These are created by feeding them “training data,” which may include a variety of data sources, such as queries generated by businesses and their customers. The data lake that results is the “magic sauce” of generative AI. In an enterprise environment , generative AI has the potential to revolutionize work processes while creating a closer-than-ever connection with target users. Still, businesses must know what they’re getting into before they begin; as with the adoption of any new technology, generative AI increases an organization’s risk exposure. Proper implementation means understanding — and controlling for — the risks associated with using a tool that feeds on, ferries and stores information that mostly originates from outside company walls. Chatbots for customer services are effective uses of generative AI One of the biggest areas for potential material improvement is customer service. Generative AI-based chatbots can be programmed to answer frequently asked questions, provide product information and help customers troubleshoot issues. This can improve customer service in several ways — namely, by providing faster and cheaper round-the-clock “staffing” at scale. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unlike human customer service representatives, AI chatbots can provide assistance and support 24/7 without taking breaks or vacations. They can also process customer inquiries and requests much faster than human representatives can, reducing wait times and improving the overall customer experience. 
As they require less staffing and can handle a larger volume of inquiries at a lower cost, the cost-effectiveness of using chatbots for this business purpose is clear. Chatbots use appropriately defined data and machine learning algorithms to personalize interactions with customers, and tailor recommendations and solutions based on individual preferences and needs. These response types are all scalable: AI chatbots can handle a large volume of customer inquiries simultaneously, making it easier for businesses to handle spikes in customer demand or large volumes of inquiries during peak periods. To use AI chatbots effectively, businesses should ensure that they have a clear goal in mind, that they use the AI model appropriately, and that they have the necessary resources and expertise to implement the AI chatbot effectively — or consider partnering with a third-party provider that specializes in AI chatbots. It is also important to design these tools with a customer-centric approach, such as ensuring that they are easy to use, provide clear and accurate information, and are responsive to customer feedback and inquiries. Organizations must also continually monitor the performance of AI chatbots using analytics and customer feedback to identify areas for improvement. By doing so, businesses can improve customer service, increase customer satisfaction and drive long-term growth and success. You must visualize the risks of generative AI To enable transformation while preventing increasing risk, businesses must be aware of the risks presented by use of generative AI systems. This will vary based on the business and the proposed use. Regardless of intent, a number of universal risks are present, chief among them information leaks or theft, lack of control over output and lack of compliance with existing regulations. Companies using generative AI risk having sensitive or confidential data accessed or stolen by unauthorized parties. This could occur through hacking, phishing or other means. Similarly, misuse of data is possible: Generative AIs are able to collect and store large amounts of data about users, including personally identifiable information; if this data falls into the wrong hands, it could be used for malicious purposes such as identity theft or fraud. All AI models generate text based on training data and the input they receive. Companies may not have complete control over the output, which could potentially expose sensitive or inappropriate content during conversations. Information inadvertently included in a conversation with a generative AI presents a risk of disclosure to unauthorized parties. Generative AIs may also generate inappropriate or offensive content, which could harm a corporation’s reputation or cause legal issues if shared publicly. This could occur if the AI model is trained on inappropriate data or if it is programmed to generate content that violates laws or regulations. To this end, companies should ensure they are compliant with regulations and standards related to data security and privacy, such as GDPR or HIPAA. In extreme cases, generative AIs can become malicious or inaccurate if malicious parties manipulate the underlying data that is used to train the generative AI, with the intent of producing harmful or undesirable outcomes — an act known as “data poisoning.” Attacks against the machine learning models that support AI-driven cybersecurity systems can lead to data breaches, disclosure of information and broader brand risk. 
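One lightweight illustration of that first category of risk, and of the kind of control discussed in the next section, is scrubbing obvious identifiers from a prompt before it ever leaves the company. A minimal sketch follows; the regex patterns and placeholder labels are illustrative only and are no substitute for a proper data-loss-prevention tool:

```python
# Redact obvious personal identifiers before a prompt is sent to an external model.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer Jane Roe (jane.roe@example.com, 555-867-5309) reports a billing error."
print(redact(prompt))
# -> "Customer Jane Roe ([EMAIL REDACTED], [PHONE REDACTED]) reports a billing error."
```

The same gate can be applied in reverse to the model's output before it is shown to a customer.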
Controls can help mitigate risks To mitigate these risks, companies can take several steps, including limiting the type of data fed into the generative AI, implementing access controls to both the AI and the training data (i.e., limiting who has access), and implementing a continuous monitoring system for content output. Cybersecurity teams will want to consider the use of strong security protocols, including encryption to protect data, and additional training for employees on best practices for data privacy and security. Emerging technology makes it possible to meet business objectives while improving customer experience. Generative AIs are poised to transform many client-facing lines of business in companies around the world and should be embraced for their cost-effective benefits. However, business owners should be aware of the risks AI introduces to an organization’s operations and reputation — and the potential investment associated with proper risk management. If risks are managed appropriately, there are great opportunities for successful implementations of these AI models in day-to-day operations. Eric Schmitt is Global Chief Information Security Officer at Sedgwick. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,206
2,023
"Open-source Ray 2.4 upgrade speeds up generative AI model deployment | VentureBeat"
"https://venturebeat.com/ai/open-source-ray-2-4-upgrade-speeds-up-generative-ai-model-deployment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Open-source Ray 2.4 upgrade speeds up generative AI model deployment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The open source Ray machine learning (ML) technology for deploying and scaling AI workloads is taking a big step forward today with the release of version 2.4. The new release takes specific aim at accelerating generative AI workloads. Ray, which benefits from a broad community of open-source contributions, as well as the support of lead commercial vendor Anyscale , is among the most widely used technologies in the ML space. OpenAI , the vendor behind GPT-4 and ChatGPT, relies on Ray to help scale up its machine learning training workloads and technology. Ray isn’t just for training; it’s also broadly deployed for AI inference as well. The Ray 2.x branch first debuted in August 2022 and has been steadily improved in the months since, including the Ray 2.2 release, which focused on observability. With Ray 2.4, the focus is squarely on generative AI workloads, with new capabilities that provide a faster path for users to get started building and deploying models. The new release also integrates with models from Hugging Face , including GPT-J for text and Stable Diffusion for image generation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Follow VentureBeat’s ongoing generative AI coverage<< “Ray is basically providing the open-source infrastructure for managing the LLM [large language model] and generative AI life cycle for training, batch inference, deployment and the productization of these workloads,” Robert Nishihara, cofounder and CEO of Anyscale, told VentureBeat. “If you want everyone in every business to be able to integrate AI into their products, it’s about lowering the barrier to entry, reducing the level of expertise that you need to build all the infrastructure.” How Ray 2.4 is generating new workflows for generative AI The way that Ray 2.4 is lowering the barrier to building and deploying generative AI is with a new set of prebuilt scripts and configurations. Rather than users needing to configure and script each and every type of generative AI deployment manually Nishihara said Ray 2.4 users will be able to get up and running — out of the box. “This is providing a very simple starting point for people to get started,” he said. 
“They’re still going to want to modify it and bring their own data, but they will have a working starting point that is already getting good performance.” Nishihara was quick to note that what Ray 2.4 provides is more than just configuration management. A common way for many types of technologies to be deployed today is with infrastructure-as-code tooling such as Terraform or Ansible. He explained the goal is not just about configuring and setting up the cluster to enable a generative AI model to run; with Ray 2.4, the goal is to actually provide runnable code for training and deploying an LLM. Functionally, what Ray 2.4 is providing is a set of Python scripts that a user would have otherwise needed to write on their own in order to deploy a generative AI model. “The experience you want developers to have is, it’s like one click and then you have an LLM behind some endpoint and it works,” he said. The Ray 2.4 release is targeting a specific set of generative AI integrations using open-source models on Hugging Face. Among the integrated model use cases is GPT-J, which is a small-scale text-generation model. There is also an integration for fine-tuning the DreamBooth image-generation model, as well as supporting inference for the Stable Diffusion image model. Additionally, Ray 2.4 provides integration with the increasingly popular LangChain tool, which is used to help build complex AI applications that use multiple models. A main feature of Ray is the Ray AI Runtime (AIR), which helps users to scale ML workflows. Among the AIR components is one called a trainer, which (not surprisingly) is designed for training. With Ray 2.4, there is a series of new integrated trainers for ML-training frameworks, including ones for Hugging Face Accelerate and DeepSpeed, as well as PyTorch Lightning. Performance optimizations in Ray 2.4 accelerate training and inference A series of code optimizations were made in Ray 2.4 that help boost performance. One of these is the handling of array data, which is a way that data is stored and processed. Nishihara explained that the common approach for handling data for AI training or inference is to have multiple, disparate stages where data is first processed, and then operations such as training or inference are executed. The challenge is that the pipeline for executing those stages can introduce some latency where compute and GPU resources are not being fully utilized. With Ray 2.4, instead of processing data in stages, Nishihara said, the technology now streams and pipelines the data so that it does not all have to fit into memory at the same time. In addition, to keep the overall utilization as high as possible, there are optimizations for preloading some data onto GPUs. It’s not just about keeping GPUs busy; it’s about keeping CPUs busy too. “Some of the processing you’re doing should run on CPUs and some of the processing you’re doing should run on GPUs,” Nishihara said. “You want to keep everything busy and scaled on both dimensions. That’s something that Ray is uniquely good at and that is hard to do otherwise.” 
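For comparison, a sketch of that streamed pattern using Ray Data, where batches flow through the model without materializing each stage for the whole dataset first; the dataset and model are placeholders, and exact arguments vary across Ray releases:

```python
# Streamed batch inference with Ray Data: batches are pipelined through the
# model rather than processed as fully separate, staged passes.
import ray
from transformers import pipeline

ray.init()
ds = ray.data.from_items([{"prompt": f"Example input {i}"} for i in range(64)])

def generate_batch(batch):  # receives a pandas DataFrame of rows
    # Reloaded per batch for simplicity; an actor pool would keep it resident.
    generator = pipeline("text-generation", model="gpt2")
    outputs = generator(list(batch["prompt"]), max_new_tokens=20)
    batch["generated"] = [o[0]["generated_text"] for o in outputs]
    return batch

results = ds.map_batches(generate_batch, batch_size=8, batch_format="pandas")
print(results.take(2))
```

Because each batch is handed to the model as soon as it is produced, accelerators are not left idle waiting for a separate preprocessing stage to finish over the whole dataset.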
"
3,207
2,023
"New AI tools poised to revolutionize 3D engineering | VentureBeat"
"https://venturebeat.com/ai/new-ai-tools-poised-to-revolutionize-3d-engineering"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest New AI tools poised to revolutionize 3D engineering Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. After several decades of hope, hype and false starts, it appears that artificial intelligence (AI) has finally gone from throwing off sparks to catching fire. Tools like DALL-E and ChatGPT have seized the spotlight and the public imagination, and this latest wave of AI appears poised to be a game-changer across multiple industries. But what kind of impact will AI have on the 3D engineering space? Will designers and engineers see significant changes in their world and their daily workflows, and if so, what will those changes look like? Design assistance and simulation It’s like a law of the universe: AI brings value anywhere there is a massive volume of data, and the 3D engineering space is no exception. Thanks to the huge datasets that many engineering software vendors already have at their disposal — in many cases, we’re talking about millions of models — AI has a wealth of data to draw upon to provide design guidance and design optimization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, if an architect is looking to put a certain type of room — a laundry room, maybe — inside a house or apartment with a certain floor plan, the AI within the building information modeling (BIM) software will have seen enough successful examples of this being done in other situations to know exactly how to seamlessly make it happen. AI can also start carrying some of the load when it comes to putting the finishing touches on a design. For example, a designer, by typing in a prompt to “make this building look more appealing,” could trigger AI to populate an architectural drawing with 3D models of just the right type of furniture, perhaps, or some perfectly manicured trees and hedges out front. All the designer has to do then is approve AI’s suggested additions. This kind of assistance will make the life of the actual human in front of the computer screen much, much easier. We’re going to see more and more of this type of role for AI, where it serves as an assistant embedded within computer-aided design (CAD) programs, working side-by-side with the designer and springing into action when called upon. Similarly, AI’s ability to learn from large datasets and make sense of what it learns can assist with simulation. 
In both design assistance and simulation, these capabilities are still evolving and will take years to realize their full potential, but they will gradually be able to take more and more items off of humans’ plates — greatly increasing the productivity of individual designers and engineers in the process. New ways of 3D visualization AI also has some interesting implications when it comes to reconstructions and digitizations of the physical world. Case in point: We’re now at a stage where we can feed AI a couple of basic 2D images of a particular building or object, and it will create a full 3D, volumetric representation (i.e., not just a superficial surface model) of that item. This is thanks to neural radiance fields (aka NeRFs), AI-powered rendering models that ingest multiple 2D viewpoints and then perform some internal calculations and extrapolations to generate a 3D model out of those 2D images. 3D object recognition Of course, as photos and 2D images become increasingly viable source material for the creation of a 3D model, point clouds — which are created by scanning objects or structures — will still remain a valuable source of data. AI has some great potential applications here as well. Similar to the way that AI has become quite adept at feature recognition in photos — identifying the furry, four-legged thing in a picture as a “dog” while identifying the rectangular object as a “couch” — it can bring similar capabilities to point cloud data, helping to pick out hyper-specific features within the sea of scanned points or triangles. Examples here could be the ability to identify the walls and ceiling of a scanned building or holes and other features in a complex CAD assembly. Amidst all these AI-assisted developments in visualization and object recognition, however, there are implications for the graphics capabilities of engineering software. Many products already manage both mesh and point clouds — but in the near future, they may have to manage NeRFs and other representations, all while finding a way for the different representations to coexist. Of course, that’s part of the beauty of game-changing innovations like AI: As its impact ripples through the larger ecosystem, other technologies must respond in kind to the new world it creates, spurring even more innovation. Uncertainty and opportunity in 3D engineering After a slow build, AI has reached a tipping point — and the 3D engineering world will feel its impact in ways that range from extensions of what is already possible to surprising new capabilities and functionality. For the designer or engineer, this is nothing to fear. While AI might be the number one game-changer in the coming years, bringing massive change and uncertainty, it also promises to change the game in interesting ways, helping designers and engineers to tackle their work and shape the world around us with greater efficiency, more creativity and new levels of virtuosity. Eric Vinchon is VP of Product Strategy at Tech Soft 3D. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
"
3,208
2,023
"How vector databases can revolutionize our relationship with generative AI | VentureBeat"
"https://venturebeat.com/ai/how-vector-databases-can-revolutionize-our-relationship-with-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How vector databases can revolutionize our relationship with generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generative AI has received a lot of attention already this year in the tech world and beyond. Whether it’s ChatGPT’s prose or Stable Diffusion’s art , 2022 provided an insight into the potential for AI to disrupt creative industries. But behind the headlines, 2022 brought an even more important development in AI: the rise of the vector database. While their impacts are less immediately obvious, the adoption of vector databases could completely upend the way we interact with our devices, along with dramatically improving our productivity in a vast range of administrative and clerical tasks. Ultimately, vector databases will be essential infrastructure in bringing about the societal and economic changes promised by AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But what is a vector database? To understand that, we have to make sense of the underlying problem it addresses: unstructured data. The database dilemma Databases are one of the software industry’s longest-lasting and most resilient verticals. The total spend on databases and database management solutions doubled from $38.6B in 2017 to $80B in 2021. And since 2020, databases have only further entrenched their position as one of the most rapidly growing software categories, owing to further digitization following mass shifts to remote working. However, the modern database is still constrained by a problem that has persisted for decades: the problem of unstructured data. This is the up to 80% of data stored globally that has not been formatted, tagged or structured in a way that allows it to be rapidly searched or recalled. For a simple analogy of structured vs. unstructured data, think of a spreadsheet with multiple columns per row. In this case, a row of “structured data” has all the relevant columns filled in, whereas a row of “unstructured data” does not. In the case of the unstructured entry, it may be that the data has been automatically imported into the first column of the row; someone now needs to break up that cell and populate data into relevant columns. Why is unstructured data a problem? In short, it makes it harder to sort, search, review and use information in a database. However, our understanding of unstructured data is relative to how data is usually structured. 
Missing tags or misaligned formatting means that unstructured entries can be missed in searches or incorrectly excluded/included from filtering. This introduces risks of error to many database operations, which we have to address through manually structuring the data. This often requires us to manually review unstructured entries. This doesn’t mean that the data itself is necessarily unstructured; it just requires more manual intervention than our usual means of data storing. We often hear about the burden of manual review with claims such as data scientists spending 80% of their time on data preparation. But in practice, this is something we all do to some extent, or at least live with the effects of. If you’ve had to wrestle with a file explorer to find something on your hard drive or spend lots of time screening out irrelevant search engine results, you’ve likely been hit by the unstructured data problem. This wasted time on manual formatting, reviewing and filtering is not a new or exclusively digital problem. For example, librarians manually arrange books according to the Dewey Decimal System. The unstructured data problem is just a digital version of a fundamental challenge with every record-keeping task humans have had since we invented writing: We need to classify information to store and use it. This is where vector databases prove particularly exciting. Rather than relying on distinct categories and lists to organize our records, vector databases instead place them on a map. Vectors and mapping Vector databases use a concept in machine learning and deep learning called vector embeddings. Vector embedding is a technique where words or phrases in a text are mapped to high-dimensional vectors, also known as word embeddings. These vectors are learned in such a way that semantically similar words are close together in the vector space. This representation allows deep neural networks to process textual data more effectively, and has proven very useful in a variety of natural language processing tasks such as text classification, translation and sentiment analysis. In the database context, vector embedding is effectively a numerical representation of a group of properties we want to measure. To create an embedding, we take a trained machine learning model and instruct it to monitor for those properties in entries in a dataset. In the case of a text string, for example, the model could be told to log the average word length, sentiment analysis scores, or occurrence of specific words. The final embedding takes the form of a series of numbers corresponding to the “scores” logged in the audit of properties. A vector database takes the scores of the vector embeddings and plots them on a graph. Every property we measure in a vector embedding constitutes a dimension of the graph, resulting in it usually having many more than the three dimensions we can conventionally visualize. With all this information plotted, we can still calculate how “far” away any one embedding is from another embedding in the same way we can in any other graph. Perhaps more importantly, we can engage in a novel way of searching data. By generating a vector embedding of an inputted search query, we plot a point on the graph we want to target. Then, we can discover the embeddings that are the nearest to our search point. Vector embeddings are not a perfect solution for everything. 
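Before turning to those limitations, here is a minimal sketch of the search flow just described: embed a few documents, embed the query, and rank by distance. It assumes the sentence-transformers library and an off-the-shelf model, both illustrative choices:

```python
# Embed documents and a query, then rank documents by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works

docs = [
    "Invoice 4821 was paid on March 3rd.",
    "The onboarding checklist for new engineers.",
    "Quarterly revenue grew 12% year over year.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["how do new hires get set up?"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec        # cosine similarity, since vectors are normalized
print(docs[int(np.argmax(scores))])  # expected: the onboarding checklist document
```

In a real vector database the ranking is served by an approximate nearest-neighbor index rather than a brute-force dot product, but the interface is the same: a query vector goes in, the closest stored vectors come out.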
Vector embeddings are not a perfect solution for everything, however. They are typically learned in an unsupervised manner, making it difficult to interpret their meaning and how they contribute to overall model performance. Pre-trained embeddings can also contain biases present in the training data, such as gender, racial or political biases, which can negatively impact model performance. The potential of vector search A vector database doesn't rely on tags, labels, metadata or other tools typically used to structure data. Instead, because a vector embedding can track any property we deem relevant, vector databases allow us to obtain search results based on overall similarity. Whereas current searches of unstructured data involve manual reviewing and interpreting, vector databases will allow searches to actually reflect the meaning behind our queries rather than superficial properties like keywords. This change stands to revolutionize data handling, record-keeping and most administrative and clerical work. Because of the reduction in "false positive" search results and a reduced need to pre-screen and format queries to a system, vector databases can dramatically boost the productivity and efficiency of just about any job in the knowledge economy. Aside from gains in administrative productivity, these advanced search capabilities will allow us to rely on databases to engage more effectively with creative and open-ended queries. This is an ideal complement to the rise of generative AI. Because vector databases reduce the need to structure data, we can substantially speed up training times for generative AI models by automating much of the work around processing unstructured data for training and production. As a result, many organizations can simply import their unstructured data into a vector database and tell it what properties they want measured in their embeddings. With those embeddings generated, an organization can rapidly train and deploy a generative model by simply letting it search the vector database to gather information for tasks. The vector database is set to dramatically improve our productivity and revolutionize how we field queries to computers. Altogether, this makes vector databases one of the most important emergent technologies of the coming decade. Rick Hao is partner at Speedinvest. "
3,209
2,023
"How generative AI is transforming enterprise search solutions | VentureBeat"
"https://venturebeat.com/ai/how-generative-ai-is-transforming-enterprise-search-solutions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight How generative AI is transforming enterprise search solutions Share on Facebook Share on X Share on LinkedIn Presented by Glean Generative AI can unlock the full potential of data in enterprise environments for the employees who rely on it. In this VB Spotlight event, learn how generative AI has transformed enterprise search, improving productivity, building better business outcomes, and more. Watch free on-demand! Enterprise search is a growing pain point. The explosion of SaaS tools over the past decade has brought a sophisticated array of solutions that have changed how work is done – but has also brought along data fragmentation. Employees are working in multiple, disparate applications, creating content in one, communicating about it in several others, looking for background information in yet another, and so on. No one is clear where documents live, where information can be dug up, whether it lives in someone’s head or is hidden somewhere on the network. “The pain points come from not knowing how to navigate this sprawl of software and information, and the mental overhead required to remember these things cuts across functions,” says Eddie Zhou, founding engineer, intelligence, at Glean. “It makes onboarding new hires a massive pain point as well, especially in a hybrid or remote environment — it’s hard to know what you should be looking at and where you should be looking for it.” Generative AI, which has been blowing up the headlines, has made it possible for users to interface with an enterprise work assistant in a much more natural way, taking a large chunk of cognitive overhead away. It makes enterprise search feel like the web searches they’re used to, lowering the barrier to knowledge. The evolution of enterprise search solution Companies have been trying to tackle the challenge of enterprise search for decades, mostly with custom internal tools, but the technology to create a comprehensive solution hasn’t existed before now. A general standardization of tools across organizations – most companies use the Microsoft suite, for example, or Jira, etc. – was a step toward scalability. Artificial intelligence was another step forward, but the main challenge in enterprise search is sparsity: a much smaller set of documents from which to train a model. “The advent of large foundation language models in 2018 made it possible to bring knowledge from the web and from larger sets of data to make enterprise search, which operates on a much smaller set of data, work closer to the way that people have come to expect,” Zhou says. 
Building the kind of system that learns and works out of the gate was a key turning point; many enterprise search players today are still building manual heuristic systems that need to be plugged in and hand-tuned. And now, generative AI is a leap forward, bringing a new kind of intelligence to a plug-and-play search engine. How generative AI transforms enterprise search Interest in conversational AI essentially peaked in 2016 and 2017 and then seemed to peter out: many promises were made about the potential of the technology before it was actually sophisticated enough to keep them. Today, with ChatGPT going mainstream, the technology is significantly more advanced, and the vision of a conversational agent in the work setting is a much more real possibility, Zhou says. It's about giving people access to the information they need in a way that feels intuitive. And it can bring users the information they need, when they need it, comprehensively searching apps across the company and understanding context, language, behavior and relationships to find personalized answers. It can surface knowledge and even connect users to the people who can help answer questions or accomplish tasks. A solution like Glean connects to all of an organization's data sources, crawling the content and indexing all the metadata that exists for those sources, such as links between documents and messages, authors, access permissions, and activity surrounding content: by whom, from where and when. For instance, while Slack search is useful for surfacing an old message, that search can't follow a link to a Google Drive and index any of the information in those documents. Being able to connect to everything that a given company might have knowledge in makes the search engine's knowledge complete. Leveraging data from multiple sources means the engine is always learning, which makes the search stack better. "That completeness really is necessary to deliver a search experience that works," says Zhou. "When a given employee comes to their keyboard, in their mind they have a mental model of all the ways their data is connected. The system that they're working with also needs to have that." Securing data with a trusted knowledge model The conversation around trust and ethics in generative AI is crucial, Zhou adds, and the trusted knowledge model is fundamental to delivering a generative experience in the enterprise. It's built into how the platform indexes information. For each data source it connects to and each document it crawls, it also natively crawls its layers of permissions. This unified view of who a user is across data sources means a search will only turn up the documents and information they have access to. Referenceability, or transparency into where the generative model found that information, means a user can trust the answers they receive. "For us, the foundation of the trusted knowledge model is permissions and data governance, and it's fundamental to delivering a good generative experience," he says. "Building on top of permissioned search also lets us ensure that we are providing relevant information, because we've understood who a user is, understood the language of a given company, plus the relationships between information and those people. Ultimately we are able to deliver a better end-to-end experience."
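To illustrate the permissions idea in code (a hypothetical sketch with made-up names, not Glean's actual implementation), the pattern is that every indexed document carries the access list crawled from its source system, and each query is filtered against the requesting user's identity and group memberships before results are returned.

```python
# Illustrative sketch of permission-aware search: documents carry the
# permissions crawled from their source systems, and a search only returns
# results the querying user is allowed to see.
from dataclasses import dataclass, field

@dataclass
class Document:
    source: str                 # e.g. "google_drive", "slack"
    title: str
    text: str
    allowed: set[str] = field(default_factory=set)  # principals crawled from the source

def permission_aware_search(query: str, user: str, groups: set[str],
                            index: list[Document]) -> list[Document]:
    """Keyword match stands in for relevance ranking; the permission check is the point."""
    principals = {user} | groups
    hits = [d for d in index if query.lower() in (d.title + " " + d.text).lower()]
    return [d for d in hits if d.allowed & principals]

index = [
    Document("google_drive", "Q3 launch plan", "Roadmap for the Q3 launch.", {"eng-team"}),
    Document("slack", "launch thread", "Discussion of launch dates.", {"alice", "bob"}),
    Document("hr_system", "compensation bands", "Confidential launch-team salaries.", {"hr-team"}),
]
# Alice is in eng-team, so she sees the roadmap and the Slack thread, but not the HR data.
for doc in permission_aware_search("launch", user="alice", groups={"eng-team"}, index=index):
    print(doc.source, "-", doc.title)
```

Referenceability can then be layered on top by returning the matching documents alongside any generated answer, so users can see where the information came from.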
To learn more about how generative AI unlocks the full potential of enterprise data, get a closer look at the trusted knowledge model for generative AI, and more, don't miss this VB Spotlight. Register to watch free on-demand! Agenda Understanding the present and the future of AI in enterprise search Unlocking the full potential of data in enterprise environments with generative AI Recognizing the importance of a trusted knowledge model for generative AI Facilitating information access and discovery to improve employee productivity Creating more intelligent, personalized and effective experiences Presenters Phu Nguyen, Head of Digital Workplace, Pure Storage Jean-Claude Monney, Digital Workplace, Technology and Knowledge Management Advisor Eddie Zhou, Founding Engineer, Intelligence, Glean Art Cole, Moderator, VentureBeat "
3,210
2,023
"Google, Microsoft and Meta each said 'AI' nearly 50 times on earnings calls. Here's why we should all care | VentureBeat"
"https://venturebeat.com/ai/google-microsoft-and-meta-each-said-ai-nearly-50-times-on-earnings-calls-heres-why-we-should-all-care"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google, Microsoft and Meta each said ‘AI’ nearly 50 times on earnings calls. Here’s why we should all care Share on Facebook Share on X Share on LinkedIn Image composite by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tech giants Alphabet, Microsoft and Meta all reported robust revenue growth in their first-quarter earnings calls this week, highlighting their ambitions and investments in artificial intelligence. The term “AI” was repeated dozens of times by the executives and analysts on the calls, reflecting the industry’s belief that AI is the key to innovation and competitive advantage. Alphabet’s call mentioned AI 50 times , followed by Meta with 49 times and Microsoft with 46 times. The number of references to AI by these tech titans is just the latest signal that investors are clamoring for opportunity to invest in generative AI technology, which has captivated Silicon Valley in recent months. It also signals that Alphabet, Microsoft and Meta are now being viewed as bellwethers for the entire AI industry, which is big business: According to PwC, the AI market is predicted to contribute $15.7 trillion to the global economy by 2030. Growing influence in fast-growing market Big Tech companies such as Google, Microsoft and Facebook control many of the most important developments in artificial intelligence, given the sheer cost of compute and scale required to develop large language models (LLMs). A recent report warned that “ industrial capture ” is looming over the AI landscape — that is, “a handful of individuals and corporations now control much of the resources and knowledge in the sector, and will ultimately shape its impact on our collective future.” The tech giants have enormous influence over billions of users who rely on their tools and platforms for many aspects of their personal and professional lives. Their views and actions on AI have implications for policymakers, regulators and society at large. As they go, so do we — as well as policymakers, regulators and society at large. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To date, these companies have been applying AI to improve their existing products and services, such as search engines, social networks, cloud computing and digital assistants, as well as to create new ones, such as self-driving cars, virtual reality and new healthcare technologies. 
Microsoft is a big winner thanks to OpenAI partnership Microsoft, which placed a huge bet by investing billions in OpenAI, the developer of ChatGPT, emerged as a big winner, with $52.9 billion in revenue, an increase of 7%. The results pushed Microsoft's stock higher and sent a signal that its wager on OpenAI — which began as an under-the-radar $1 billion investment in 2019 — is showing signs of paying off. Over the past three months, Microsoft has been busy embedding generative AI across its product portfolio. There was the splashy announcement in February of the new OpenAI-powered Bing, as well as the mid-March debut of the generative AI-powered Copilot 365 to "change work as we know it." "The world's most advanced AI models are coming together with the world's most universal user interface — natural language — to create a new era of computing," said Satya Nadella, chairman and chief executive officer of Microsoft, on Tuesday's earnings call. "Across the Microsoft Cloud, we are the platform of choice to help customers get the most value out of their digital spend and innovate for this next generation of AI." Google fell behind in AI but focused on long-term promise Meanwhile, Google parent Alphabet reported a 3% rise in revenue and earnings for the first quarter, which topped estimates. However, there's no doubt that Google has had to face up to falling behind in AI — even though it was the research leader for the past decade, developing the Transformer architecture that today's LLMs are based on. In December, Google management reportedly issued a "Code Red" after the release of ChatGPT, then famously flopped in announcing its Bard chatbot in March. But on its earnings call, Sundar Pichai, CEO of Alphabet and Google, talked up progress. "Our investments and breakthroughs in AI over the last decade have positioned us well," he said, highlighting steps made in developing state-of-the-art large language models, empowering developers, creators and partners with AI tools, and enabling organizations of all sizes to use and benefit from Google's AI advances. "We have made good progress across all three areas," he said. Meta pushed AI ambitions, sending stock higher Finally, Meta also released better-than-expected earnings that sent the company's stock higher — a bright spot as the company has taken steps to focus on efficiency (which led to massive layoffs). Mark Zuckerberg, Meta founder and CEO, used the numbers as a jumping-off point to tout Meta's AI efforts and ambitions. "A key theme I want to discuss today is AI," he said near the beginning of the earnings call. "Our AI work comes in two main areas: first, the massive recommendations and ranking infrastructure that powers all of our main products — from feeds to Reels to our ads system to our integrity systems, and that we've been working on for many, many years — and second, the new generative foundation models that are enabling entirely new classes of products and experiences." The AI battle is just beginning, but Big Tech power looms large We will be watching Google, Microsoft and Meta — as well as Amazon and Apple — battle for position in the AI space. But the issues related to industrial capture will be playing out as well, including the fact that training a state-of-the-art LLM can require hundreds of millions of dollars. ChatGPT, for instance, reportedly required more than 10,000 GPUs to train, and demands even more resources to continuously operate.
Even if these companies look for new directions beyond the GPU and ever-larger LLMs, the fact remains that they are at the center of the generative AI explosion. So we all need to pay attention to what is coming down the Big Tech pike — because in many ways, they control our AI future. "
3,211
2,023
"EU lawmakers pass draft of AI Act, includes copyright rules for generative AI | VentureBeat"
"https://venturebeat.com/ai/eu-lawmakers-pass-draft-of-ai-act-includes-last-minute-change-on-generative-ai-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EU lawmakers pass draft of AI Act, includes copyright rules for generative AI Share on Facebook Share on X Share on LinkedIn Image by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. After months of negotiations and two years after draft rules were proposed, EU lawmakers have reached an agreement and passed a draft of the Artificial Intelligence (AI) Act, which would be the first set of comprehensive laws related to AI regulation. The next stage is called the trilogue, when EU lawmakers and member states will negotiate the final details of the bill. According to a report , the members of the European Parliament (MEPs) confirmed previous proposals to put stricter obligations on foundation models, a subcategory of “General Purpose AI” that includes tools such as ChatGPT. Under the proposals, companies that make generative AI tools such as ChatGPT would have to disclose if they have used copyrighted material in their systems. The report cited one significant last-minute change in the draft of the AI Act related to generative AI models, which “would have to be designed and developed in accordance with EU law and fundamental rights, including freedom of expression.” “The AI Act offers EU lawmakers an opportunity to put an end to the use of discriminatory and rights-violating artificial intelligence (AI) systems,” said Mher Hakobyan, advocacy advisor on AI regulation at Amnesty International, in a blog post. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The governmental AI regulation many have been waiting for While a variety of state-based AI-related bills have been passed in the U.S., it is larger government regulation — in the form of the EU AI Act — that many in the AI and the legal community have been waiting for. Back in December, Avi Gesser, partner at Debevoise and Plimpton and cochair of the firm’s cybersecurity, privacy and artificial intelligence practice group, told VentureBeat that the AI Act is attempting to put together a risk-based regime to address the highest-risk outcomes of AI — while striking a balance so the laws do not clamp down on innovation. “It’s about recognizing that there are going to be some low-risk use cases that don’t require a heavy burden of regulation,” he said. As with the privacy-focused GDPR , he explained, the EU AI Act will be an example of a comprehensive European law coming into effect and slowly trickling into various state- and sector-specific laws in the U.S. 
Yesterday, the National Law Review wrote, "The AI Act will have a global impact, as it will apply to organizations providing or using AI systems in the EU; and providers or users of AI systems located in a third country (including the UK and US), if the output produced by those AI systems is used in the EU." "
3,212
2,023
"Developers embrace AI Tools but face 'Big Code' challenges, survey finds | VentureBeat"
"https://venturebeat.com/ai/developers-embrace-ai-tools-but-face-big-code-challenges-survey-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Developers embrace AI Tools but face ‘Big Code’ challenges, survey finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence is both the best and worst thing to happen to developers, engineering leaders and companies as a whole, according to a new report from Sourcegraph , a code-intelligence platform that helps developers navigate and understand large and complex codebases. The report is based on a survey of more than 500 software developers and engineers across various industries and regions. It reveals, among other surprising findings, that 95% of developers surveyed are already using AI tools to write code, such as GitHub Copilot , ChatGPT and Cody , an AI coding assistant launched by Sourcegraph last month. While these tools can boost productivity and creativity, they also pose significant challenges for managing and securing the vast amounts of code being generated and modified every day. This problem is known as “Big Code,” and it has been a headache for years; but the report warns that it will hit crisis mode if companies don’t get a handle on how their developers use AI at work. “Big Code has gotten worse over the last 10 years,” Sourcegraph founder and CEO Quinn Slack said in an interview with VentureBeat. “77% of developers, according to the study, say that their codebase grew five times in the last three years. And as we look into the future, AI is about to make that much worse.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Big Code refers to the situation where codebases are made up of millions (sometimes billions) of lines of code written by thousands of developers over the last two or three decades. The consequences of Big Code include things like struggling to fix critical vulnerabilities and delayed productivity. The report shows that only 65% of companies have a plan for Big Code and even fewer have any idea of how to approach using AI for software engineering. AI is clearly the best thing to happen to development teams in unlocking a new level of productivity, but if not done right could be the worst in terms of a “toothpaste out of the tube” situation for codebases and subsequent tech debt and security implications, the report says. 
"If you're an enterprise developer at one of our customers and you need to make a change [to your code], that change is incredibly difficult to make because it could have a ripple effect on a hundred or a thousand other systems," Slack said. "Everywhere you look, there's incredible complexity, because you have thousands of software engineers that are constantly changing the system." The hidden cost of AI tools in the era of 'Big Code' The report includes input from hundreds of Sourcegraph customers. The company provides code intelligence — such as a code search function — for engineering teams at four out of five FAANG companies (Facebook, Amazon, Apple, Netflix and Google), top tech companies like Canva and Uber, four of the top 10 U.S. banks, companies launching satellites into space and the team working on the Large Hadron Collider. The report highlights several concerns developers at these organizations have about AI's impact on Big Code: 61% are concerned about AI's contribution to tech debt. 67% are worried about code sprawl due to AI's rapid growth. 76% fear the amount of new code created that will need to be managed. Developers recognize the threat Big Code and AI pose to their companies' ability to innovate and compete, with 72% seeing it as a real risk. They've identified several key areas where they need help: 95% want assistance in quickly getting up to speed on their codebase. 91% want more efficient ways to identify and resolve code issues. 91% would save significant time if their codebase were fully searchable across all sources and repositories. 88% desire tools that would allow them to achieve greater output with fewer resources. Developers currently spend only 20% of their time in the codebase writing new code for core products, with that percentage dropping to 14% when accounting for non-code activities like meetings and documentation. This has led to developer dissatisfaction: 73% of developers experience more frequent blockages due to the size of their codebase. 85% struggle to maintain efficiency in their daily work. 82% wish they could spend less time searching for information or old code and more time coding. In a world where AI is transforming developer tools and productivity, the report serves as a stark reminder that companies must address the challenges posed by Big Code head-on. By providing developers with the right support and taking a proactive approach to managing AI-generated code, businesses can unlock the full potential of AI without succumbing to its potential pitfalls. "