Columns: id (int64, 0–17.2k) · year (int64, 2k–2.02k) · title (string, length 7–208) · url (string, length 20–263) · text (string, length 852–324k)
id: 13967 · year: 2023
"What's inside Box? More generative AI | VentureBeat"
"https://venturebeat.com/ai/whats-inside-box-more-generative-ai"
"What’s inside Box? More generative AI

Box got its start a decade ago as a cloud-based document-sharing technology. The platform has evolved significantly over the years to become an enterprise content cloud, and today it is taking another step forward with the launch of Box AI. With its new service, Box is providing a set of generative AI capabilities that will enable organizations to better understand and create new enterprise content. At launch, Box AI will benefit from an integration with OpenAI’s large language model (LLM) services, with plans to add more LLM providers over time. This is not the first time Box has had AI on its platform. Back in 2017, the company announced a partnership with Google to integrate AI for image recognition. In 2021, Box tapped deep learning to help with security efforts and detect sophisticated malware.
“What we’re announcing with Box AI is a broad platform where you’re going to be able to use AI to generally work with content and understand it in new ways,” Aaron Levie, cofounder and CEO of Box, told VentureBeat. “It’s really driven by these LLMs that have so much more breadth to them in terms of what kind of problems we can solve.”

How ChatGPT inspired Box (and many others)

With Box AI, Box joins a string of enterprise software vendors that have added some form of generative AI to their platforms. Microsoft has been steadily adding what it refers to as “copilot” capabilities to its enterprise software services to give users AI-powered functionality. Salesforce has similarly expanded its enterprise software-as-a-service (SaaS) platforms with generative AI under the banner Einstein GPT. For Microsoft, Salesforce and now Box, a key driver for integrating generative AI is the runaway popularity of OpenAI’s ChatGPT. Levie said that his team was playing with ChatGPT when it first came out at the end of 2022, and within a day realized the huge impact and potential it had for IT generally — and for Box’s users, too. Box’s executive management saw that they could use the power of generative AI to gain insights from any type of document, which could be a real benefit to the company’s enterprise users, Levie explained. “We realized there were so many use cases where we could just make work more efficient and, in many cases, more delightful, and really get the entire power of LLMs brought to enterprise content,” said Levie.

What’s inside Box AI

Box AI is not training a new AI model — rather, it brings the power of existing LLMs to data that an enterprise has on the Box platform. Instead of a generic search for specific keywords inside a document, Box AI will now allow users to ask questions about what’s in a document.
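Mechanically, this kind of document question-answering usually amounts to packing the document and the user’s question into a single prompt for the LLM. The sketch below is a generic illustration of that pattern, not Box’s actual implementation; the template wording and the `build_doc_qa_prompt` name are assumptions.

```python
def build_doc_qa_prompt(document: str, question: str) -> str:
    """Assemble a grounded Q&A prompt: the model is told to answer only
    from the supplied document rather than from its general training data."""
    return (
        "Answer the question using only the document below. "
        "If the answer is not in the document, say so.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The resulting string would then be sent to an LLM chat/completions endpoint.
prompt = build_doc_qa_prompt(
    document="Clause 7 allows either party to terminate with 24 hours notice.",
    question="Which clauses are the riskiest?",
)
```

The same template works for summarization or brainstorming by changing the question; the document text itself is the conditioning context.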
Levie said this allows enterprise employees to interact with their content as a source of knowledge that AI can learn from, provide analysis on, and use to help organizations be more productive. For example, Levie said that a user could be looking at a budget document and ask the AI to help brainstorm ways to improve processes or save money. Another example could be a user looking at a legal contract and asking which clauses are the riskiest. “Being able to work with your data and ask new kinds of questions on top of your content — that’s what Box AI is,” said Levie.

Generating new content

The platform also includes the ability to generate new content with Box Notes, Box’s online document editor and collaboration workspace. Levie said that with Box Notes, an enterprise will now be able to use Box AI to write a meeting agenda, draft a blog post, summarize information or create any other type of content. “You can imagine that Box AI will be plugged into a variety of the components of our platform,” said Levie. “It’s really about turning our platform into an intelligent content cloud that helps you unlock the value of your content.” Overall, Levie expects to see the power of generative AI and intelligence infused across all enterprise software vendor offerings in the future as simple table stakes. “When you have a platform shift like this, it becomes a fundamental requirement from your customers and you have to participate, so it really turns into table stakes,” said Levie. “I don’t think there’s an enterprise on the planet that would accept that their enterprise software in five years from now is just not intelligent.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
id: 13968 · year: 2023
"Informatica bets big on data privacy with Privitar acquisition | VentureBeat"
"https://venturebeat.com/data-infrastructure/informatica-bets-big-on-data-privacy-with-privitar-acquisition"
"Informatica bets big on data privacy with Privitar acquisition

Informatica, a provider of end-to-end data management solutions for enterprises, today announced its intent to acquire Privitar, a London-based startup helping companies embed privacy protection into their data efforts. The financial terms of the transaction, which is expected to close by the third quarter of 2023, were not disclosed. The deal will strengthen the privacy layer of Informatica’s tech stack, particularly the company’s AI-powered Intelligent Data Management Cloud (IDMC) platform, enabling enterprises to democratize the use of data across departments while adhering to regulations and ethical data principles at the same time. “Data governance and responsible use of data is a growing priority for large businesses, but too often requires trading off agility and self-service.
With Privitar’s data access management and privacy capabilities integrated into IDMC, customers can deliver best-in-class data governance, access, policies, and compliance, empowering better data-driven decision-making and business outcomes,” Amit Walia, CEO at Informatica, said in a statement.

What capabilities does Privitar bring to the table?

Founded in 2014 and last valued at $400 million, Privitar has focused on helping companies protect data while enabling its use for applications like AI and analytics. The company offers tools to build collaborative workflows and policy-based data privacy and access controls into data operations. For instance, it can be used to embed invisible watermarks that track unauthorized distribution of data, or to automatically de-identify information, taking into account who’s accessing it. With this deal, all of these capabilities will be coming into Informatica’s Intelligent Data Management Cloud to support critical, high-growth use cases around cloud analytics, governance, data mesh and data marketplaces. The platform will combine the tooling with its Claire AI engine to automate the application of policy-based privacy and access controls for end customers. “Joining Informatica will enable us to better serve our customers with an integrated data management stack delivering a complete data governance solution with security and privacy intrinsic to the platform. As part of Informatica, we can accelerate our innovation, enhance our capabilities and expand our reach,” Jason du Preez, CEO at Privitar, said. Notably, this will be another major update for IDMC, following the introduction of Claire GPT to help enterprise users consume, process, manage and analyze data through plain natural language prompts.
Growing consolidation in the data space

The deal is another example of growing consolidation in the data space. Similar signs were seen when engineering house dbt Labs agreed to acquire Transform, which has sought to create a semantic data layer to better integrate the modern data stack, and when Thoma Bravo-owned Qlik announced its intent to join efforts with Talend, another Thoma Bravo-owned entity. “It makes a ton of sense for the Snowflakes and the Databricks of the world to be very acquisitive. Whether we see really big acquisitions right now or whether they come towards the latter half of this year or the next year is a point of question. I’d probably bet more on the latter half of this year and early part of next year,” Sean Knapp, founder and CEO of Ascend.io, which automates data and analytics engineering workloads, told VentureBeat in February. "
id: 13970 · year: 2023
"Business leaders investing in generative AI, automation to reinvent physical operations: Report | VentureBeat"
"https://venturebeat.com/ai/business-leaders-investing-generative-ai-automation-reinvent-physical-operations"
"Business leaders investing in generative AI, automation to reinvent physical operations: Report

Connected operations cloud firm Samsara today released a report that highlights how organizations in industries that drive over 40% of global GDP are revamping their physical operations. The 2023 State of Connected Operations Report, compiled after surveying more than 1,500 physical operations leaders from nine countries, reveals that these leaders are making substantial investments in digitization to enhance their supply chains, improve employee skills and adopt sustainable practices, all of which have yielded positive results. According to the report, challenges faced by operations leaders in the past year garnered significant attention and sparked discussions in boardrooms worldwide.
These challenges included soaring fuel costs and inflation, shortages of labor and equipment, and constraints in supply chains. However, leaders successfully navigated these obstacles by embracing technology and finding ways to optimize efficiency.

Automation and generative AI in the pipeline

Leaders are revising their supply chains and technology budgets to address these challenges and build resilience. The study also found that leaders are now embracing generative artificial intelligence (AI) and automation: 84% plan to use generative AI and 91% plan to use automation to modernize their operations by 2024. Additionally, 51% are already using or planning to use autonomous vehicles or equipment this year. To optimize operations further, leaders are replacing traditional pen-and-paper processes with digital workflows, with 55% of their field employees predicted to depend on digital workflows to carry out their daily tasks by 2025. “We asked more than 1,500 physical operations leaders across nine countries how they are reinventing their operations. They have over 3.6 million vehicles and assets under management and over 6.3 million employees,” Jeff Hausman, chief product officer at Samsara, told VentureBeat. “Our research found that two in three leaders are increasing their technology budgets this year, indicating they are confident in the ROI of digital transformation. Leaders report benefits like increased net profit and safety as a result of their investments to date.” To enhance supply chain predictability and efficiency, 59% of leaders plan to onshore their operations this year, relocating them to their country of origin. According to the study, real-time operations data confers a competitive edge and was deemed crucial for decision-making by 90% of leaders.
“Leaders predicted that by 2025, more than half of employees in the field will rely on digital workflows to perform day-to-day tasks. For employees in physical operations, digitizing workflows reduces friction in administrative aspects and adds more flexibility. It’s a significant change from pen-and-paper processes and speaks to the evolution of technology purpose-built for these roles,” Hausman told VentureBeat. “Labor shortages continue to be a major challenge for operations leaders. Consider, in the U.S., the average driver turnover rate is about 90%. At the same time, digitization is shifting the day-to-day employee experience. So we’re at an inflection point: roles are shifting, and the talent pool is tight.” Independent research firm Lawless Research conducted the 2023 State of Connected Operations survey from February 6 to March 10, 2023. The audience surveyed comprised 1,525 physical operations leaders, including C-suite executives.

Investing in next-gen technologies to optimize efficiency

“Our research found significant changes are underway in the next 18 months. Leaders are investing heavily in digitization to improve supply chains, employee skills and sustainability practices. These investments are all connected to combatting today’s toughest challenges,” Samsara’s Hausman told VentureBeat. Leaders anticipate that within the next two years, one out of every six employees will be engaged in roles that don’t exist today. To tackle this upcoming shift, over half (52%) of the leaders have prioritized equipping their employees with the skills they will need to navigate emerging technologies. Notably, the surveyed organizations are projected to invest approximately $7 billion in 2023 to facilitate employee training, upskilling and reskilling initiatives. “Optimizing the existing workforce and investing in career development is critical to setting an organization up for long-term success.
In fact, a key to retention is proving to employees they are essential to the future of the business and providing opportunities for them to build their careers,” explained Hausman. “Our research also found leaders are uncovering new ways to upskill employees with technology. For example, over half will use extended reality and AI to upskill employees in the next two years.” Samsara’s research also revealed that data plays a fundamental role in every digital transformation strategy, serving as a robust foundation for fostering resilience and gaining a competitive edge. Technology empowers organizations with expedited access to data, and leaders who possess accurate and timely insights are better equipped to anticipate and proactively address potential issues, thereby ensuring seamless operation. “Almost every leader we surveyed (90%) said having accurate, real-time operations data is critical to their decision-making,” Dana Chery, VP of marketing at Samsara, told VentureBeat. “They’re dedicating substantial resources to ensure they have the technology in place to leverage that data to its fullest, with two-thirds of leaders reporting that they are increasing their technology budgets for 2023.”

Digital transformation for physical operations

Chery added that managing physical operations is complex, and organizations have historically struggled to collect and analyze data to make informed decisions. However, recent technological advancements, such as plug-and-play digital sensors, wireless technology and cloud-based AI data processing, have enabled a significant digital transformation in the past decade. “From driver safety to back office operations and customer service, it’s difficult to find a role where technology can’t support improved outcomes. It’s a new era for these industries,” she said. “Our research found that physical operations leaders are excited to test technologies like generative AI to see their potential — only 5% said they had no plans to adopt it.
This demonstrates the universal need for technology to support the employee experience and increase efficiency across the board.” The report also highlighted that connected operations leaders, who possess the highest level of digital maturity, demonstrated a six-fold greater likelihood of surpassing their financial goals by 25% or more. The study found that these leaders are making substantial investments to fortify their organizations and enhance customer experiences. Many anticipate positive transformations and a favorable return on investment within the next 12-18 months. Additionally, improving workforce productivity with new technologies is a critical priority for 56% of those surveyed. “We took a closer look at the differences between organizations that reported the highest level of digital maturity — Connected Operations Leaders — and those at the beginning stages of digitization,” said Chery. “Compared to organizations in the beginning stages of digitization, Connected Operations Leaders are five times more likely to rate the productivity of their workforce as ‘excellent’ and six times more likely to report exceeding their financial goals by 25% or more. The bottom-line benefits of digitization are clear.”

Sustainability initiatives for a better future

Chery highlighted that even modest enhancements in operations, facilitated by digitization, can yield significant impacts for organizations — in sustainability, for instance. Samsara’s research discovered a growing trend of adopting electric and hybrid vehicles, with half of the leaders intending to acquire or lease electric vehicles this year to mitigate their emissions. The more leaders invest in decarbonizing transportation, the quicker global emissions can be reduced. “Decarbonizing transportation is a high priority, and one way organizations are reducing emissions is by adopting electric vehicles.
Leaders predict over half of their organizations’ fleet vehicles will be electric or hybrid by 2025,” she said. Chery explained that these opportunities arise from market demands, macroeconomic shifts and the increasing role of technology within organizations. A notable example is the rapid transformation brought about by the emergence of ESG (Environmental, Social and Governance) efforts. She noted that roles like ESG officer were less prevalent just a few years ago, and we are currently witnessing the emergence of even more specialized positions, such as fleet sustainability manager. “Their investments are leading to the invention of new revenue streams, such as pay-per-use or subscription charging stations, or by selling energy back to the grid,” Chery added. “These are just a couple of examples of how leaders are rethinking their sustainability initiatives to not only meet their sustainability goals but to drive bottom-line results.” "
id: 13970 · year: 2023
"Skyflow launches ‘privacy vault’ for building LLMs | VentureBeat"
"https://venturebeat.com/ai/skyflow-launches-privacy-vault-for-building-llms"
"Skyflow launches ‘privacy vault’ for building LLMs

Palo Alto, California-based Skyflow, a company that makes it easier for developers to embed data privacy into their applications, today announced the launch of a “privacy vault” for large language models. The solution, as the name suggests, provides enterprises with a layer of data privacy and security throughout the entire lifecycle of their LLMs, beginning with data collection and continuing through model training and deployment. It comes as enterprises across sectors continue to race to embed LLMs, like the GPT series of models, into their workflows to simplify processes and boost productivity.

Why a privacy vault for GPT models?

LLMs are all the rage today, helping with things like text generation, image generation and summarization. However, most of the models out there have been trained on publicly available data.
This makes them suitable for broader public use, but not so much for the enterprise side of things. To make LLMs work in specific enterprise settings, companies need to train them on their internal knowledge. A few have already done this or are in the process of doing it, but the task is not easy: you have to ensure that the internal, business-critical data used for training the model is protected at all stages of the process. This is exactly where Skyflow’s GPT privacy vault comes in. Delivered via API, the solution establishes a secure environment, allowing users to define their sensitive data dictionary and have that information protected at all stages of the model lifecycle: data collection, preparation, model training, interaction and deployment. Once fully integrated, the vault uses the dictionary to automatically redact or tokenize the chosen information as it flows through GPT — without lessening the value of the output in any way. “Skyflow’s proprietary polymorphic encryption technique enables the model to seamlessly handle protected data as if it were plaintext,” Anshu Sharma, Skyflow cofounder and CEO, told VentureBeat. “It will protect all sensitive data flowing into GPT models and only reveal sensitive information to authorized parties once it has been processed by the model and returned.” For example, Sharma explained, plaintext sensitive data elements like email addresses and social security numbers are swapped with Skyflow-managed tokens before inputs are provided to GPTs. This information is protected by multiple layers of encryption and fine-grained access control throughout model training, and is ultimately de-tokenized after the GPT model returns its output. As a result, authorized end users get a seamless output experience, with plaintext sensitive data bypassing the GPT model itself.
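The swap-then-restore flow Sharma describes can be sketched in a few lines. Everything below is illustrative: the regexes, token format and `call_llm` stub are assumptions made for the example, not Skyflow’s API, and real polymorphic encryption is far more involved than a lookup table.

```python
import re

# Illustrative vault: sensitive values are swapped for opaque tokens before
# the text reaches the model, then restored for authorized users afterwards.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def tokenize(text: str):
    """Replace sensitive values with tokens; return safe text and the vault map."""
    vault = {}
    for kind, pattern in PATTERNS.items():
        def swap(match):
            token = f"TOK_{kind}_{len(vault)}"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(swap, text)
    return text, vault

def detokenize(text: str, vault: dict) -> str:
    """Restore plaintext values in model output for authorized consumers."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; it only ever sees tokenized text.
    return f"Processed record: {prompt}"

record = "Contact alice@example.com, SSN 123-45-6789"
safe, vault = tokenize(record)                   # model never sees plaintext
reply = detokenize(call_llm(safe), vault)        # plaintext restored afterwards
```

Because the model sees stable placeholder tokens, the structural patterns in the text are preserved, which is why, as Sharma argues, output quality need not suffer.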
“This works because GPT LLMs already break down inputs to analyze patterns and relationships between them and then make predictions about what comes next in the sequence. So, tokenizing or redacting sensitive data with Skyflow before inputs are provided to the LLM doesn’t impact the quality of GPT LLM output — the patterns and relationships remain the same as before plaintext sensitive data is tokenized by Skyflow,” Sharma added. The offering can be integrated into an enterprise’s existing data infrastructure. It also supports multi-party training, where two or more entities can share anonymized datasets and train models to unlock insights.

Multiple use cases

While the Skyflow CEO didn’t share how many companies are using the GPT privacy vault, he did note that the offering, an extension of the company’s existing privacy-focused solutions, is helping protect sensitive clinical trial data in the drug development cycle as well as customer data used by travel platforms to improve customer experiences. IBM, too, is a customer of Skyflow and has been using the company’s products to de-identify sensitive information in large datasets before analyzing them via AI/ML. Notably, there are also alternative approaches to the privacy problem, such as creating a private cloud environment for running individual models or a private instance of ChatGPT, but those could prove far more expensive than Skyflow’s solution. Currently, in the data privacy and encryption space, the company competes with players like Immuta, Securiti, Vaultree, Privitar and Basis Theory.
"
id: 13971 · year: 2023
"Why everyone is talking about generative AI, not just the experts | VentureBeat"
"https://venturebeat.com/ai/why-everyone-is-talking-about-generative-ai-not-just-the-experts"
"Guest post: Why everyone is talking about generative AI, not just the experts

Improvements over the last decade in machines’ ability to generate images and text have been staggering. As is often the case with innovation, progress is not linear but comes in leaps and bounds, which surprises and delights researchers and users alike. 2022 was a banner year for innovation in generative AI, built on the advent of diffusion methods for image generation and of increasingly large-scale transformers for text generation. And while it provided a major leap forward for the entire natural language processing (NLP) industry, there are three reasons why generative AI models were the first to stir the public’s excitement, and why they’ll remain the main points of entry into what language AI can do for the time being.

What’s behind the generative AI excitement?
The most obvious reason is that they fall into a very intuitive class of AI systems. These models aren’t used to create a high-dimensional vector or some uninterpretable code, but rather natural-looking images, or fluent and coherent text — something that anyone can see and understand. People outside of machine learning do not need specific expertise to judge how natural or fluent the system is, which makes this part of AI research seem much more approachable than other (perhaps equally important) areas. Second, there is a direct connection between generation and how we evaluate intelligence: When examining students in school, we value the ability to generate answers over the ability to discriminate between answers by selecting the right one. We believe that having students explain things in their own words helps show a better grasp of the topic — ruling out the chance that they’ve simply guessed the right answer or memorized it. So when artificial systems produce natural images or coherent prose, we feel compelled to compare that to similar knowledge or understanding in humans, although whether this is overly generous to the actual abilities of artificial systems is an open question in the research community. What is clear from a technical perspective is that the ability of models to produce novel but plausible images and text shows that rich internal representations of the underlying domain (e.g., the task at hand, the sort of things the images or text are “about”) are contained in these models. Furthermore, these representations are useful across a wider range of domains than just generation for generation’s sake. In short, while generative models were the first models to grasp the public’s attention, there will be many more valuable use cases to come. 
One thing from another Third, the latest generative models show an ability to conditionally generate. Instead of sampling existing images or snippets of text, they have the ability to create text, video, images or other modalities which are conditioned on something else — like partial text or imagery. To see why this is important, one needs to look no further than most human activities, which involve generating something depending on something else. To give some examples: Writing an essay is generating text conditioned on a question/topic and the knowledge and views contained in our own experience and in books, papers and other documents. Having a conversation is generating responses conditioned on our knowledge of the world, our understanding of the pragmatics the situation calls for, and what has been said up to that point in the conversation. Drawing architectural plans is generating an image based on our knowledge of architectural and structural engineering principles, sketches or pictures of the terrain and its topology/surroundings, and the (often underspecified) requirements provided by the client. Most intelligent behavior follows this pattern of producing something based on other things as context. The fact that artificial systems now have this ability means we’ll likely see more automation in our work, or at least a more symbiotic relationship between humans and computers to get things done. We can see this already in new tools to help humans code, like CodeWhisperer, or help write marketing copy, like Jasper. Today, we have systems that can create text, images or videos based on other information we feed to them. That means we can apply these generations to similar problems and processes for which we once needed human experts. This will lead to additional automation, or to more symbiotic forms of support between humans and artificial systems, which has both practical and economic consequences. 
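To make "conditional generation" concrete, here is a deliberately tiny sketch: a bigram model that samples each next word conditioned on the previous one. The corpus and names here are invented for illustration, and this toy is nothing like the large-scale transformers discussed above, but it shows the same shape: output drawn from a distribution that depends on supplied context.

```python
import random

# Toy corpus: the "knowledge" the generator is conditioned on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build a conditional distribution P(next word | current word).
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(prompt_word, length=5, seed=0):
    """Generate words, each sampled conditioned on the word before it."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no continuation observed for this word
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Swapping the prompt word changes the distribution of everything that follows, which is the essence of conditioning.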
The new foundational tools For the rest of 2023, the big question will be what all this progress really means in terms of potential applications and utility. It is an exceedingly exciting time to be in the industry because we are looking to do nothing less than build foundational tools for building intelligent systems and processes, making them as intuitive and applicable as possible, and putting them into the hands of the broadest class of developers, builders and innovators possible. It’s something that drives my team and fuels our mission to help computers better communicate with us and use language to do so. While there is more to human intelligence than the processes this technology will enable, I have little doubt that — paired with the boundless ability humans have to constantly innovate on the backs of new tools and technology — the innovation we’ll see in 2023 will change the way we use computers in disruptive and wonderful ways. Ed Grefenstette is head of machine learning at Cohere. DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. © 2023 VentureBeat. All rights reserved. "
13972
2023
"Source code must become a C-level priority | VentureBeat"
"https://venturebeat.com/programming-development/source-code-must-become-a-c-level-priority"
"We’ve all heard Marc Andreessen’s famous proclamation in 2011 that “software is eating the world.” It was a prescient statement: Today, modern, digital-driven enterprises provide all sorts of software-based products and services, while also relying heavily on software to manage their internal operations. Even organizations known for selling hardware, such as electronics companies and automakers, are increasingly offering subscription-based software services to grow revenues. Organizations have long realized how important their software is to their business. But they’re now fully realizing just how critical their software’s source code is. Source code is their most critical asset: It contains all the business logic and dictates how the software will behave and perform. It’s the source code that is eating the world. 
Source code is the foundation of every modern enterprise. The C-suite needs to take ownership of the code and make it a priority on par with things like sales, marketing, security, finance and HR. To strengthen this critical strategic asset and maximize their business results, organizations must focus on code at the highest level. The source-code problem This transition will address a major problem that has gone unchecked for years: code ownership. Someone has to be responsible for stewarding source code and software. Today, no one really owns source code. Developers don’t feel like they own code because most software contains lots of legacy code that they didn’t write. Instead, they only feel ownership over the new code that they’re writing. This hurts overall code quality. Bad legacy code is often ignored and allowed to fester, leading to worse software performance and potential vulnerabilities. We’re seeing more chief development officers (CDOs) emerge, but they’re mostly responsible for owning the software development process and ensuring best practices are followed, not owning the code itself. CDOs and VPs of engineering ultimately focus on process and efficiency, not on code ownership. Owning code at the C-level Enterprises that prioritize code will ensure that there is someone at the highest level of the organization who is in charge of code and accountable for its success or failure. Today, it’s unthinkable that any major company could exist without an executive dedicated to managing security or someone in charge of managing finances. As the C-level begins to make code a priority, every modern, software-driven organization will have a leader dedicated to owning code. In some cases, this may take the form of a chief coding officer (CCO). Code ownership will help eliminate technical debt. 
Any organization that’s large enough to have 200–300 developers will likely have a tremendous amount of technical debt resulting from flawed legacy code. With someone specifically in charge of code, organizations can dedicate efforts to systematically cleaning code, fixing mistakes and minimizing their technical debt. In turn, this will free developers to focus on new projects and drive real business value. These leaders will also spearhead efforts to preemptively correct coding errors before they cause major problems for the software (and the business), resulting in even greater developer productivity and overall efficiency. Almost every major enterprise, no matter its industry, relies heavily on software to deliver services, manage operations internally or promote itself. Without clean code, the performance of this software will suffer, negatively impacting the business. As more organizations continue to recognize that source code is the central component of software, they will begin to prioritize it at the boardroom level and will ensure that they have someone, perhaps a CCO, who is solely responsible for the success of their code. Olivier Gaudin is CEO and cofounder of Sonar. "
13973
2023
"Forget the hybrid cloud; it's time for the confidential cloud  | VentureBeat"
"https://venturebeat.com/security/forget-the-hybrid-cloud-its-time-for-the-confidential-cloud"
"As cloud adoption gains traction, it’s clear that security teams have been left to play catch-up. In diverse hybrid cloud and multicloud environments, encrypting data at rest and in transit isn’t enough; it needs to be encrypted in use, too. This is where confidential computing comes in. Today, the Open Confidential Computing Conference (OC3) gathered IT industry leaders to discuss the development of confidential computing. Hosted by Edgeless Systems, the event welcomed more than 1,200 attendees, technologists and academics. Speakers included Intel CTO Greg Lavender and Microsoft Azure CTO Mark Russinovich. They discussed how the role of confidential computing will evolve as organizations migrate to confidential cloud models. 
What confidential computing is — and isn’t One of the core panel discussions from the event, led by Russinovich, centered on defining what confidential computing is — and isn’t. “The most succinct definition is the third leg in the data protection triangle of protecting data at rest, protecting data in transit; confidential computing is protecting data in use,” Russinovich said in an exclusive interview with VentureBeat. “The data is protected while it’s being processed.” More specifically, a vendor using confidential computing will create a secure piece of hardware that stores encryption keys within an encrypted trusted execution environment (TEE). The TEE encrypts data and code while in use so they can’t be modified or accessed by any unauthorized third parties. “Data in use means that, while an application is running, it’s still impossible for a third party — even the owner of the hardware the application is running [on] — to ever see the data in the clear,” said Mark Horvath, senior director analyst at Gartner. Encrypting data in use, rather than only at rest or in transit, means that organizations can confidentially and securely process personally identifiable information (PII) or financial data with AI, ML and analytics solutions without exposing it in memory on the underlying hardware. It also helps protect organizations from attacks that target code or data in use, such as the memory-scraping and malware-injection attacks launched against Target and the Ukraine power grid. Introducing the confidential cloud One of the underlying themes at the OC3 event, particularly in a presentation by Lavender, was how the concept of the confidential cloud is moving from niche to mainstream as more organizations experiment with use cases at the network’s edge. 
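The data-in-use model described above can be made concrete with a purely conceptual sketch. This is not a real TEE (actual implementations rely on hardware features such as Intel SGX or AMD SEV), the XOR "cipher" is a demo stand-in rather than real cryptography, and the class and method names are invented for illustration; the point is only the boundary: callers hold ciphertext, while the key and plaintext exist only inside the "enclave" object.

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (demo only, not a real cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

class Enclave:
    """Stand-in for a TEE: the key never leaves this object, and callers
    only ever submit and receive ciphertext."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # sealed inside the "enclave"

    def seal(self, plaintext: bytes) -> bytes:
        return encrypt(self._key, plaintext)

    def process(self, ciphertext: bytes) -> bytes:
        # Decryption and computation happen only "inside" the enclave.
        plaintext = decrypt(self._key, ciphertext)
        result = plaintext.upper()  # the confidential computation
        return encrypt(self._key, result)

    def unseal(self, ciphertext: bytes) -> bytes:
        return decrypt(self._key, ciphertext)

enclave = Enclave()
sealed = enclave.seal(b"patient record")  # data at rest: ciphertext
processed = enclave.process(sealed)       # data in use: never exposed outside
print(enclave.unseal(processed))          # b'PATIENT RECORD'
```

Everything outside the `Enclave` object sees only ciphertext; real TEEs enforce this boundary in hardware rather than by Python convention.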
“The use cases are expanding rapidly, particularly at the edge, because as people start doing AI and machine learning processing at the edge for all kinds of reasons [such as autonomous vehicles, surveillance infrastructure management], this activity has remained outside of the security perimeter of the cloud,” said Lavender. The traditional cloud security perimeter is based on the idea of encrypting data-at-rest in storage and as it transits across a network, which makes it difficult to conduct tasks like AI inferencing at the network’s edge. This is because there’s no way to prevent information from being exposed during processing. “As the data there becomes more sensitive — particularly video data, which could have PII information like your face or your driver’s [license] or your car license [plate] number — there’s a whole new level of privacy that intersects with confidential computing that needs to be maintained with these machine learning algorithms doing inferencing,” said Lavender. In contrast, adopting a confidential cloud approach enables organizations to run workloads in a TEE, securely processing and inferencing data across the cloud and at the network’s edge, without leaving PII, financial data or biometric information exposed to unauthorized users and compliance risk. This is a capability that early adopters are aiming to exploit. After all, in modern cloud environments, data isn’t just stored and processed in a ring-fenced on-premise network with a handful of servers, but in remote and edge locations with a range of mobile and IoT devices. The next-level: Multi-party computation Organizations that embrace confidential computing unlock many more opportunities for processing data in the cloud. For Russinovich, some of the most exciting use cases are multi-party computation scenarios. 
These are scenarios “where multiple parties can bring their data and share it, not with each other, but with code that they all trust, and get shared insights out of that combination of data sets with nobody else having access to the data,” said Russinovich. Under this approach, multiple organizations can share data sets to process with a central AI model without exposing the data to each other. One example of this is Accenture’s confidential computing pilot developed last year. This used Intel’s Project Amber solution to enable multiple healthcare institutions and hospitals to share data with a central AI model to develop new insights on how to detect and prevent diseases. In this particular pilot, each hospital trained its own AI model before sending information downstream to be aggregated within a centralized enclave, where a more sophisticated AI model processed the data in more detail without exposing it to unauthorized third parties or violating regulations like HIPAA. It’s worth noting that in this example, confidential computing is differentiated from federated learning because it provides attestation that the data and code inside the TEE are unmodified, which enables each hospital to trust the integrity and legitimacy of the AI model before handing over regulated information. The state of confidential computing adoption in 2023 While interest in confidential computing is growing as more practical use cases emerge, the market remains in its infancy, with Absolute Reports estimating its value at $3.2 billion in 2021. However, for OC3 moderator Felix Schuster, CEO and founder of Edgeless Systems, confidential computing is rapidly “deepening adoption.” “Everything is primed for it,” said Schuster. He pointed out that Greg Lavender recently spoke in front of 30 Fortune 500 CISOs, of whom only two had heard of confidential computing. After his presentation, 20 people followed up to learn more. 
“This unawareness is a paradox, as the tech is widely available and amazing things can be done with it,” said Schuster. “There is consensus between the tech leaders attending the event that all of the cloud will inevitably become confidential in the next few years.” Broader adoption will come as more organizations begin to understand the role it plays in securing decentralized cloud environments. Considering that members of the Confidential Computing Consortium include Arm, Facebook, Google, Nvidia, Huawei, Intel, Microsoft, Red Hat, AMD, Cisco and VMware, the solution category is well-poised to grow significantly over the next few years. Why regulated industries are adopting confidential computing So far, confidential computing adoption has largely been confined to regulated industries, with more than 75% of demand driven by industries including banking, finance, insurance, healthcare, life sciences, public sector and defense. As the Accenture pilot indicates, these organizations are experimenting with confidential computing as a way to reconcile data security with accessibility so that they can generate insights from their data while meeting ever-mounting regulatory requirements. Keeping up with regulatory compliance is one of the core drivers of adoption among these organizations. “The technology is generally seen as a way to simplify compliance reporting for industries such as healthcare and financial services,” said Brent Hollingsworth, director of the AMD EPYC Software Ecosystem. “Instead of dedicating costly efforts to set up and operate a secure data processing environment, organizations can process sensitive data in encrypted memory on public clouds — saving costs on security efforts and data management,” said Hollingsworth. In this sense, confidential computing gives decision-makers both peace of mind and assurance that they can process their data while minimizing legal risk. 
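To build intuition for the multi-party pattern described earlier (shared insights without shared data), here is a toy sketch using additive secret sharing. Note that this is a different cryptographic technique from the attested-enclave approach used in the Accenture pilot, and every name and number below is invented for illustration: each hospital splits its private count into random shares, so no single computation server learns any individual value, yet the joint total can still be recovered.

```python
import random

MOD = 2**61 - 1  # arithmetic modulo a large prime

def split_into_shares(value: int, n_servers: int, rng: random.Random):
    """Additively secret-share a value: any n-1 shares reveal nothing."""
    shares = [rng.randrange(MOD) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % MOD)  # shares sum to value mod MOD
    return shares

# Three hospitals, each with a private patient count.
private_counts = {"hospital_a": 120, "hospital_b": 75, "hospital_c": 310}
rng = random.Random(42)
n_servers = 3

# Each hospital sends one share to each computation server.
server_inputs = [[] for _ in range(n_servers)]
for count in private_counts.values():
    for server, share in zip(server_inputs, split_into_shares(count, n_servers, rng)):
        server.append(share)

# Each server sums the shares it holds (learning nothing individually);
# combining the per-server sums reveals only the joint total.
total = sum(sum(shares) for shares in server_inputs) % MOD
print(total)  # 505
```

The same idea generalizes beyond sums, but even this minimal version captures the promise Russinovich describes: a result computed over everyone's data that no single party could have computed, or seen, alone.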
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. "
13974
2023
"Adobe Stock creators aren't happy with Firefly, the company's 'commercially safe' gen AI tool | VentureBeat"
"https://venturebeat.com/ai/adobe-stock-creators-arent-happy-with-firefly-the-companys-commercially-safe-gen-ai-tool"
"Adobe’s stock soared after a strong earnings report last week, where executives touted the success of its “commercially safe” generative AI image generation platform, Adobe Firefly. 
They say Firefly was trained on hundreds of millions of licensed images in the company’s royalty-free Adobe Stock offering, as well as on “openly licensed content and other public domain content without copyright restrictions.” On the Firefly website, Adobe says it is “committed to developing creative generative AI responsibly, with creators at the center.” “We could not be more excited about our generative AI road map that will make Adobe products more accessible to an even larger universe of people, while dramatically enhancing productivity for existing customers,” said David Wadhwani, president, digital media business at Adobe. But a vocal group of contributors to Adobe Stock, which includes 300 million images, illustrations and other content that trained the Firefly model, say they are not happy. According to some creators, several of whom VentureBeat spoke to on the record, Adobe trained Firefly on their stock images without express notification or consent. While this is certainly an issue for other text-to-image generative tools such as DALL-E 2, Stable Diffusion and Midjourney (which were trained on scrapes of imagery posted to the public web, including copyrighted imagery), it is particularly egregious for a company like Adobe, which has been deeply intertwined with the creative economy for decades, they say. Now, Adobe Stock creators say Firefly’s popularity is making it far less likely that users will purchase stock images. According to Adobe, since its launch in March, Firefly beta users have generated over 200 million images using a variety of newly available tools and features such as text-to-image, generative fill and extend image. Photoshop users generated over 150 million images in just the first two weeks using the new generative fill feature powered by Firefly. 
In addition, a flood of gen AI images into Adobe Stock is cannibalizing the platform, the creators say. According to PetaPixel, Adobe Stock is currently the only major stock website accepting AI image submissions from contributors — including those generated in non-Firefly tools — and AI images are outperforming human-generated files on the site on many metrics. (An Adobe spokesperson says “Adobe Stock respects the rights of third parties and requires all Stock contributors to comply with our terms, including those specific to the use of generative AI tools. You can find those terms here.”) Adobe Stock creators say it is unethical to train Firefly using their IP Dean Samed is a UK-based creator who works in Photoshop image editing and digital art. He told VentureBeat over Zoom that he has been using Adobe products since he was 14 years old, and has contributed over 2,000 images to Adobe Stock. “They’re using our IP to create content that will compete with us in the marketplace,” he said. “Even though they may legally be able to do that, because we all signed the terms of service, I don’t think it is either ethical or fair.” He said he didn’t receive any notice that Adobe was training an AI model. “I don’t recall receiving an email or notification that said things are changing, and that they would be updating the terms of service,” he said. According to Eric Urquhart, a Connecticut-based artist who has a day job as a matte artist at a major animation studio, artists who joined Adobe Stock years ago could never have anticipated the rise of generative AI. “Back then, no one was thinking about AI,” said Urquhart, who joined Adobe Stock in 2012 and has several thousand images on the platform. “You just keep uploading your images and you get your residuals every month and life goes on — then all of a sudden, you find out that they trained their AI on your images and on everybody’s images that they don’t own. And they’re calling it ‘ethical’ AI.” Adobe Stock creators also say Adobe has not been transparent. “I’m probably not adding anything new because they will probably still try to train their AI off my new stuff,” said Rob Dobi, a Connecticut-based photographer. “But is there a point in removing my old stuff, because [the model] has already been trained? I don’t know. Will my stuff remain in an algorithm if I remove it? I don’t know. Adobe doesn’t answer any questions.” The artists say that even if Adobe did not do anything illegal and this was indeed within its rights, the ethical thing to do would have been to notify Adobe Stock artists about the Firefly AI training in advance, and offer them an opt-out option right from the beginning. Adobe, in response to the artists’ claims, told VentureBeat by email that its goal is to build generative AI in a way that enables creators to monetize their talents, much as Adobe has done with platforms like Behance. It is important to note, a spokesperson says, that Firefly is still in beta. “During this phase, we are actively engaging the community at large through direct conversations, online platforms like Discord and other channels, to ensure what we are building is informed and driven by the community,” the Adobe spokesperson said, adding that Adobe remains “committed” to compensating creators. As Firefly is in beta, “we will provide more specifics on creator compensation once these offerings are generally available.” Adobe released Firefly in March, focused on commercial use Back in March, Adobe released Firefly at its annual conference, Adobe Summit. 
Similar to popular tools like DALL-E 2, Stable Diffusion and Midjourney, its biggest differentiators were its unique access to the massive number of images within Adobe Stock and a user interface that would allow people to use Firefly via Photoshop, Illustrator and other tools for commercial use. Last week, Adobe also announced it will bring Firefly to enterprise users. It not only touted its “commercially-safe” approach, but said it also plans to provide enterprise customers with an indemnification against copyright claims for new imagery generated with Firefly, similar to what is currently in place for Adobe Stock. While it stands by the safety of Firefly, “if a customer is sued for infringement, Adobe would take over legal defense and provide some monetary coverage for those claims,” a company spokesperson said. Bradford Newman, who leads global law firm Baker McKenzie’s machine learning and AI practice in its Palo Alto office, said Adobe’s “commercially-safe” execution and indemnification offer is one of the first and the cleanest that he has seen — because Firefly was trained on Adobe Stock imagery provided by creators, and which Adobe says it has the ability to use for this purpose according to its Stock Contributor license agreement. “It’s like having, in a way, a closed ecosystem,” he said. “What you’re warranty-ing is access to an ecosystem that’s trained and runs on a clean dataset, which as a solution has been discussed and contemplated for a while, but has never to my knowledge been fully executed at an enterprise level.” Newman emphasized that he had not read Adobe’s agreement with Stock contributors and could not comment on it specifically. 
But Adobe’s Stock Contributor Agreement dated March 1, 2022 states: “You grant us a non-exclusive, worldwide, perpetual, fully-paid, and royalty-free license to use, reproduce, publicly display, publicly perform, distribute, index, translate, and modify the Work for the purposes of operating the Website; presenting, distributing, marketing, promoting, and licensing the Work to users; developing new features and services; archiving the Work; and protecting the Work.” Experts say artists may have few options from a legal standpoint Legal experts say Adobe Stock artists and creators likely will not have the kind of legal leg to stand on that Adobe’s enterprise users will enjoy. Legal scholar Andres Guadamuz, a reader in intellectual property law at the University of Sussex in the U.K. who has been studying legal issues around generative AI, said that the language in Adobe’s Terms of Service tends to be very broad. “You give Adobe a license for perpetuity, for whatever medium shall be invented,” he said. “People don’t read those terms and conditions.” In addition, he said that he doesn’t believe an image generated using a model is a derivative of the billions of images in the dataset — so it would likely not infringe on an artist’s copyright. Newman agreed, adding that while he had not looked at the contracts Adobe Stock contributors signed, he did not think the artists’ argument was persuasive. “As I understand it, they’re saying we’re fine with the stock images being used for someone to buy and iterate on in Photoshop, but if it’s used as a dataset for generative AI, somehow there’s an issue and we’re being ripped off,” he said. But Nathaniel Bach, an attorney at Los Angeles-based Manatt, Phelps and Phillips who specializes in entertainment law, copyright and IP, pointed out that while he is not familiar with the Adobe Stock license, the current issues are part of an age-old conundrum around unanticipated technological use, such as Blu-ray, DVDs and streaming. 
That is: Is future media covered by prior contracts? “Courts have wrestled with this and come to different decisions depending on how widespread the language is and how much time has passed since the contract was entered into,” he told VentureBeat by phone. “So this sort of feels new again, with AI.”

Bach emphasized that while he doesn’t necessarily think Adobe’s actions are an overreach, he is sympathetic with the creators — he does a lot of artist advocacy work, particularly in the music space, he explained, where many agree that the industry needs to be careful about taking away the “lifeblood” of artists. “I think that one of the important things that’s happening now is that artists are speaking up and using their voices,” he said.

Creativity, or a passable copy?

“We hear the artists’ concerns,” said the Adobe spokesperson, adding that as the company speaks with the community, “we are also hearing a great deal of excitement for what these new tools can mean in terms of their productivity, and the creativity it can unlock for creators of any skill level.”

But Dobi emphasized that this creativity can easily amount to a passable copy of another artist’s work if an artist uses Firefly to create a standalone image through a prompt. “I don’t know if you’ve looked at my stock photography, but I’ve spent the last 20 years photographing abandoned buildings across the Northeast and I’ve built up quite a library of images of it, I’ve had a book published, I just had a piece in the New York Times,” he explained. “Now I saw some AI artist [online] saying, ‘Show me your urban exploration photos built through AI, I built these through Adobe Firefly’ and I looked at these photos and they could pass as my photos, I wouldn’t question whether they were real photos unless you looked really closely.
Someone using Firefly could easily put in a prompt with words like ‘mental asylum, symmetrical, natural light, peeling paint, textured walls, dirty floor,’ stuff like that.” For example, Dobi compared one of his Adobe Stock photos with images he generated using prompts in Firefly that, while not identical, are certainly similar to his own work.

Adobe Stock “not a feasible platform for us to operate in anymore”

Samed said that Adobe Stock is “not a feasible platform for us to operate in anymore,” adding that the marketplace is “completely flooded and inundated with AI content.” Adobe should “stop using the Adobe Stock contributors as their own personal IP, it is just not fair,” he said, “and then the derivative that was created from that data scrape is then used to compete against the contributors that [built and supported] that platform from the beginning.”

Dobi said he has noticed his stock photos have not been selling as well. “Someone can just type in a prompt now and recreate the images based off your hard work,” he said. “And Adobe, which is supposed to be, I mean, I guess they thought they were looking out for creators, apparently aren’t because they’re stabbing all their creators that helped create their stock library in the back.”

Urquhart said that as an artist in his mid-50s who also does analog fine art, he feels he can “ride this out,” but he wonders about the next generation of artists who have only worked with digital tools. “You have very talented Gen Z artists, they have the most to worry about,” he said. “Like if all of a sudden AI takes over and iPad digital art is no longer relevant because somebody just typed in a prompt and got five versions of the same thing, then I can always just pick up my paintbrush.”

From his perspective, Samed said, generative AI is “an arms race” using technology no one truly understands — and companies are moving too quickly and being reckless.
“The damage that’s going to be done is going to be unlike anything we’ve ever seen before,” he said. “I’m in the process of selling my company, I’ve got out — I don’t want to participate or compete in this marketplace anymore.”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved.
2023
Cisco updates Webex, aims to enhance hybrid work experiences with AI | VentureBeat
https://venturebeat.com/ai/cisco-updates-webex-aims-to-enhance-hybrid-work-experiences-with-ai
Cisco today unveiled AI-powered enhancements across its Webex suite, promising to deliver hybrid work experiences with automation, while protecting customers’ confidentiality and privacy. The updates span workspace, collaboration and customer experience categories, built on the Webex platform, and join a long list of AI and machine learning (ML) features already embedded in Cisco products.

The next step forward for such collaboration is video intelligence, which Webex is expanding throughout the conference room operating system RoomOS. With cinematic meeting experiences, cameras follow individuals through voice and facial recognition to capture the best angle of the active speaker.
This ensures focus on the speaker, while making certain that hybrid workers not physically present in the room can still feel included, according to Cisco.

Once-in-a-generation platform shift

RoomOS uses facial detection, information about where people are sitting in a room and voice location to direct the meeting and provide the best view. The feature individually frames and levels participants at eye height, and in speaker mode, uses audio triangulation from devices and an intelligent beam-forming table microphone to quickly and accurately identify the position of the active speaker. Cinematic meetings support a range of camera intelligence features, including speaker mode, frames, presenter and audience tracking, and meeting zones.

“AI is fundamentally transforming the way we work and live,” Jeetu Patel, EVP and GM for security and collaboration at Cisco, told VentureBeat. “It has the potential to make collaboration radically more immersive, personalized and efficient.” Cisco studied what he described as a “once-in-a-generation platform shift” that AI could support. The company’s efforts center around re-imagining hybrid work.

Targeting hybrid work experiences

With the rise of hybrid work, it’s essential that organizations provide employees with the flexibility to work in different locations and in different ways. To address this, Cisco has introduced three new AI-based features into its Webex suite. This includes a super resolution function that ensures crystal-clear video in Webex meetings, even in low-bandwidth conditions. This is achieved through deep neural network video recovery that hides choppiness, removes blocking artifacts and reconstructs the face and body to render in high-resolution images and videos.
Another new AI capability is smart re-lighting, which automatically enhances lighting in Webex meetings to ensure that people look their best in any environment. This is particularly useful when working in poor lighting conditions. The algorithm is trained to recognize different scenarios with people in different lighting, and automatically enhances the light on the facial foreground.

The third new capability is a “be right back” update, which automatically puts up a BRB message, blurs the background and mutes audio when a user steps away from a Webex meeting. This feature saves time and is simple to use. By leveraging a 3D face mesh algorithm, Webex can detect when a user has stepped away and replace their video feed with a BRB indicator until they return. Users can turn their audio and video back on when they are back in front of the screen.

AI-powered chat summaries

As customer expectations continue to rise and organizations handle billions of daily customer interactions, it has become challenging for agents and legacy systems to keep up with the volume and personalization required. To this end, Cisco is introducing new AI capabilities for its customer experience solutions, including Webex Contact Center and Webex Connect.

One of the new capabilities, topic analysis in Webex Contact Center, provides actionable insights to business analysts by surfacing key reasons customers are calling in. This feature is built using an AI large language model (LLM) that aggregates call transcripts and highlights trends for business analysts. Another capability, agent answers, acts as a real-time coach for human agents by listening and instantly surfacing knowledge-based articles and helpful information for the customer. This capability uses learnings from self-service and automated customer interactions and applies AI to ensure that the highest match probability options are identified first.
Meanwhile, AI-powered chat summaries eliminate the need for agents to read lengthy digital chat histories and provide key takeaways in a quickly digestible format. Lastly, Webex Connect users can now describe the function they want to perform, and AI will generate and return the appropriate code instantly, making it easier to create and iterate customer journeys quickly.
2023
Predictive analytics could be the future, but we must solve the data problem first | VentureBeat
https://venturebeat.com/enterprise-analytics/predictive-analytics-could-be-the-future-but-we-must-solve-the-data-problem-first
“Your voice is breaking up.” “We lost you for a minute there.” How many times have we all heard or said these things? Or what about the white wheel of endless buffering? We’ve all experienced the endless glitches, outages and broken app experiences that impact us more than we might care to admit.

For enterprises, the move to the cloud and reliance on SaaS apps has made the internet the corporate backbone. The internet is the digital supply chain that ensures users have a great digital experience. But there’s zero certainty in knowing if your app and its multitude of components distributed across multiple cloud environments are actually performing up to par.
So for today’s organizations looking to be more proactive and automated in how they operate and manage their environments: Is delivering a predictable digital experience across an unpredictable internet environment even in the cards? I would argue the answer is yes, but only if you solve the data problem first.

The data problem that is the internet

IT operations is an overwhelming place to be right now. In today’s connected world, where every business, application and device relies on a digital connection every hour of every day, driving superior digital experiences is critical. But with apps running in the cloud and being accessed from many remote endpoints, the number of new blind spots has created massive challenges for anyone tasked to troubleshoot broken user experiences. This complexity creates a networking model plagued by reactive troubleshooting, and user experience is regularly degraded.

Network professionals tell us that responding to disruptions and accommodating new business needs are their top two network challenges. For these businesses, the pursuit of predictive intelligence is all about the ability to move from reactive to preventative, thereby pinpointing issues before they begin to affect user experience. Forecasting and taking back control over what is happening across the cloud have now become core to the enterprise network.

Predictive intelligence: Unlocking efficiency gains and opportunities

Predictive intelligence promises real productivity gains. For organizations with hybrid workforces, the gains can be significant. Predictively identifying a single service-affecting fault and remediating it — such as by switching providers and paths that carry app traffic during peak periods — could save a single employee hours of downtime or degraded performance.
Multiplied across the employee base, that number quickly becomes material. The same is true for satisfying consumer demand. In the age of exponential choice, proactively preventing any disruption is key to delivering the always-on digital experience buyers need and demand. In fact, expectations of digital experiences have soared. Unlocking efficiency gains and opportunities to drive brand value is the real payback of predictive intelligence.

Sizing up the data-shaped challenge in predictive intelligence

Troubleshooting is a largely reactive endeavor based on analysis and informed decision-making to improve situations or highlight potential root causes of an active incident. Determining what is going, or has gone, wrong addresses an immediate need, but it doesn’t do anything to escape that cycle of users deserting your lagging application or unavailable cloud service.

That’s the promise of the predictive internet: the ability to leverage a rich dataset and visualizations to analyze historical patterns across a complex mesh of owned and third-party networks to predict outages or service degradation and take remedial actions before the effects are felt by users. Predictive intelligence at this level is both a data problem and a scale problem. Solving these is key to making it an implementable reality.

It takes an enormous amount of data to predict the beginnings of a degradation or performance deterioration with a high degree of accuracy. Although the volume of data needed to train a model has existed for some time, the data often wasn’t as clean as it needed to be. That caused flow-on effects in statistical models. Without good data, the models simply weren’t capable of producing granular assessments and actionable recommendations. With the modeling technology now mature and supported by high-quality data collected from across a customer’s wide area network, predictive intelligence is firmly within reach.
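The internals of commercial predictive engines are not public, but the core move described here, learning a baseline from historical measurements and flagging deviations before users feel them, can be sketched with a simple rolling-statistics check. The window size, threshold and latency series below are illustrative assumptions, not anyone's production model:

```python
from statistics import mean, stdev

def flag_degradation(latencies, window=5, k=2.0):
    """Flag sample i when it exceeds a rolling baseline of
    mean + k standard deviations over the previous `window` samples."""
    flags = []
    for i, value in enumerate(latencies):
        history = latencies[max(0, i - window):i]
        if len(history) < window:
            flags.append(False)  # not enough history to judge yet
            continue
        baseline = mean(history) + k * stdev(history)
        flags.append(value > baseline)
    return flags

# Stable path latency in milliseconds, then a sudden spike.
series = [20, 21, 19, 20, 22, 21, 20, 60]
flags = flag_degradation(series)
```

A real system would forecast rather than merely detect, and would feed the flag into an action such as switching providers, but the shape of the problem, clean historical data in, a confident recommendation out, is the same.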
A guiding hand

So what does predictive intelligence look like today? It starts with visibility and ends with trust. First is data-driven visibility that provides insight into the cloud and internet environments an organization doesn’t own, but that have become part of the corporate network and thereby a critical delivery mechanism for digital experiences. Just as important is complementing that visibility with owned data from an analytics model that learns from past behavior and forecasts future events. Third, and perhaps most importantly, is recommending what action to take based on data and insight from continuous performance measurement and assessment.

Giving up control of IT infrastructure is an impossible ask without building trust first. Recommendations build trust: trust that the data is right, and trust that the recommended action will provide the intended outcome. Predictive intelligence should be thought of as a guiding hand that helps businesses see and measure performance across all networks that impact the user experience, forecasts issues based on historical data and influences decision-making.

Mohit Lad is cofounder and GM of Cisco ThousandEyes.
2022
Why synthetic data makes real AI better | VentureBeat
https://venturebeat.com/ai/why-synthetic-data-makes-real-ai-better
Data is precious – so it’s been asserted; it has become the world’s most valuable commodity. And when it comes to training artificial intelligence (AI) and machine learning (ML) models, it’s absolutely essential. Still, due to various factors, high-quality, real-world data can be hard – sometimes even impossible – to come by. This is where synthetic data becomes so valuable.

Synthetic data reflects real-world data, both mathematically and statistically, but it’s generated in the digital world by computer simulations, algorithms, statistical modeling, simple rules and other techniques.
This is opposed to data that’s collected, compiled, annotated and labeled based on real-world sources, scenarios and experimentation. The concept of synthetic data has been around since the early 1990s, when Harvard statistics professor Donald Rubin generated a set of anonymized U.S. Census responses that mirrored that of the original dataset (but without identifying respondents by home address, phone number or Social Security number). Synthetic data came to be more widely used in the 2000s, particularly in the development of autonomous vehicles. Now, synthetic data is increasingly being applied to numerous AI and ML use cases.

Synthetic data vs. real data

Real-world data is almost always the best source of insights for AI and ML models (because, well, it’s real). That said, it can often simply be unavailable, unusable due to privacy regulations and constraints, imbalanced or expensive. Errors can also be introduced through bias. To this point, Gartner estimates that through 2022, 85% of AI projects will deliver erroneous outcomes. “Real-world data is happenstance and does not contain all permutations of conditions or events possible in the real world,” Alexander Linden, VP analyst at Gartner, said in a firm-conducted Q&A.

Synthetic data may counter many of these challenges. According to experts and practitioners, it’s often quicker, easier and less expensive to produce and doesn’t need to be cleaned and maintained. It removes or reduces constraints in using sensitive and regulated data, can account for edge cases, can be tailored to certain conditions that might otherwise be unobtainable or have not yet occurred, and can allow for quicker insights. Also, training is less cumbersome and much more effective, particularly when real data can’t be used, shared or moved. As Linden notes, sometimes information injected into AI models can prove more valuable than direct observation.
Similarly, some assert that synthetic data is better than the real thing – even revolutionary. Companies apply synthetic data to a variety of use cases: software testing, marketing, creating digital twins, testing AI systems for bias, or simulating the future, alternate futures or the metaverse. Banks and financial institutions use synthetic data to explore market behaviors, make better lending decisions or combat financial fraud, Linden explains. Retailers, meanwhile, rely on it for autonomous checkout systems, cashier-less stores and analysis of customer demographics. “When combined with real data, synthetic data creates an enhanced dataset that often can mitigate the weaknesses of the real data,” Linden says.

Still, he cautions that synthetic data has risks and limitations. Its quality depends on the quality of the model that created it, it can be misleading and lead to inferior results, and it may not be “100% fail-safe” privacy-wise. Then there’s user skepticism – some have referred to it as “fake data” or “inferior data.” Also, as it becomes more widely adopted, business leaders may raise questions about data generation techniques, transparency and explainability.

Real-world growth for synthetic data

In an oft-quoted prediction from Gartner, by 2024, 60% of data used for the development of AI and analytics projects will be synthetically generated. In fact, the firm said that high-quality, high-value AI models simply won’t be possible without the use of synthetic data. Gartner further estimates that by 2030, synthetic data will completely overshadow real data in AI models. “The breadth of its applicability will make it a critical accelerator for AI,” Linden says. “Synthetic data makes AI possible where lack of data makes AI unusable due to bias or inability to recognize rare or unprecedented scenarios.” According to Cognilytica, the market for synthetic data generation was roughly $110 million in 2021.
The research firm expects that to reach $1.15 billion by 2027. Grand View Research anticipates the AI training dataset market to reach more than $8.6 billion by 2030, representing a compound annual growth rate (CAGR) of just over 22%. And as the concept grows, so too do the contenders. An increasing number of startups are entering the synthetic data space and receiving significant funding in doing so. These include Datagen, which recently closed a $50 million series B; Gretel.ai, with a $50 million series B; MostlyAI, with a $25 million series B; and Synthesis AI, with a $17 million series A. Other companies in the space include Sky Engine, OneView, Cvedia and leading data engineering company Innodata, which recently launched an ecommerce portal where customers can purchase on-demand synthetic datasets and immediately train models. Several open-source tools are also available: Synner, Synthea, Synthetig and The Synthetic Data Vault.

Similarly, Google, Microsoft, Facebook, IBM and Nvidia are already using synthetic data or are developing engines and programs to do so. Amazon, for its part, has relied on synthetic data to generate and fine-tune its Alexa virtual assistant. The company also offers WorldForge, which enables the generation of synthetic scenes, and just announced at its re:MARS (Machine Learning, Automation, Robotics and Space) conference last week that its SageMaker Ground Truth tool can now be used to generate labeled synthetic image data. “Combining your real-world data with synthetic data helps to create more complete training datasets for training your ML models,” Antje Barth, principal developer advocate for AI and ML at Amazon Web Services (AWS), said in a blog post published in conjunction with re:MARS.

How synthetic data enhances the real world, enhanced

Barth described the building of ML models as an iterative process involving data collection and preparation, model training and model deployment.
In starting out, a data scientist might spend months collecting hundreds of thousands of images from production environments. A major hurdle in this is representing all possible scenarios and annotating them correctly. Acquiring variations might be impossible, such as in the case of rare product defects. In that instance, developers may have to intentionally damage products to simulate various scenarios. Then comes the time-consuming, error-prone, expensive process of manually labeling images or building labeling tools, Barth points out. AWS introduced SageMaker Ground Truth, the new capability in Amazon’s data labeling service, to help simplify, streamline and enhance this process. The new tool creates synthetic, photorealistic images. Through the service, developers can create an unlimited number of images of a given object in different positions, proportions, lighting conditions and other variations, Barth explains. This is critical, she notes, as models learn best when they have an abundance of sample images and training data enabling them to calculate numerous variations and scenarios. Synthetic data can be created through the service in enormous quantities with “highly accurate” labels for annotations across thousands of images. Label accuracy can be done at fine granularity – such as subobject or pixel level – and across modalities including bounding boxes, polygons, depth and segments. Objects and environments can also be customized with variations in such elements as lighting, textures, poses, colors and background. “In other words, you can ‘order’ the exact use case you are training your ML model for,” Barth says. 
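AWS has not published Ground Truth's internals, but the general pattern Barth describes, sampling scene parameters and getting accurate labels essentially for free because the generator placed the objects itself, can be sketched in a few lines. Every parameter name and range below is invented for illustration:

```python
import random

def generate_labeled_scenes(n, seed=0):
    """Sample scene parameters and emit each with its label attached:
    because the generator placed the object, the annotation is exact."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        scale = rng.uniform(0.5, 1.5)
        scene = {
            "object": "widget",
            "lighting": rng.choice(["soft", "harsh", "backlit"]),
            "pose_deg": rng.uniform(0.0, 360.0),
            "x": rng.uniform(0.0, 1.0),
            "y": rng.uniform(0.0, 1.0),
        }
        # The label is derived from the parameters, not annotated by hand.
        label = {"class": "widget",
                 "bbox": (scene["x"], scene["y"], 0.1 * scale, 0.1 * scale)}
        samples.append({"scene": scene, "label": label})
    return samples

dataset = generate_labeled_scenes(100)
```

The rendering step is the hard part in practice; the point of the sketch is that label accuracy down to the pixel level falls out of knowing the scene parameters, rather than from manual annotation.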
She adds that “if you combine your real-world data with synthetic data, you can create more complete and balanced datasets, adding data variety that real-world data might lack.”

Any scenario

In SageMaker Ground Truth, users can request new synthetic data projects, monitor them in progress, and view batches of generated images once they are available for review. After establishing project requirements, an AWS project development team creates small test batches by collecting inputs including reference photos and 2D and 3D sources, Barth explains. These are then customized to represent any variation or scenario – such as scratches, dents and textures. They can also create and add new objects, configure distributions and locations of objects in a scene, and modify object size, shape, color and surface texture.

Once prepared, objects are rendered via a photorealistic physics engine and automatically labeled. Throughout the process, companies receive a fidelity and diversity report providing image- and object-level statistics to “help make sense” of synthetic images and compare them with real images, Barth said. “With synthetic data,” she said, “you have the freedom to create any imagery environment.”
2022
Why synthetic data may be better than the real thing | VentureBeat
https://venturebeat.com/business/why-synthetic-data-may-be-better-than-the-real-thing
To deploy successful AI, organizations need data to train models. That said, high-quality data isn’t always easy to access – creating a major hurdle for organizations in launching AI initiatives. This is where synthetic data can be so useful.

As opposed to data that is collected from and measured in the real world, synthetic data is generated in the digital world by computer simulations, algorithms, simple rules, statistical modeling and other techniques. It is an alternative to real-world data, but it reflects real-world data, mathematically and statistically. Some experts even contend that synthetic data is better than data drawn from real-world people, places and things when it comes to training AI models.
Constraints in using sensitive and regulated data are removed or reduced; datasets can be tailored to certain conditions that might otherwise be unobtainable; insights can be gained much more quickly; and training is less cumbersome and much more effective. To that point, Gartner projects that synthetic data will completely overshadow real data in AI models by 2030. “The fact is you won’t be able to build high-quality, high-value AI models without synthetic data,” according to the Gartner report. Leaders in synthetic data To support accelerating demand, a growing number of companies are offering synthetic data – top and emerging companies in the space include Mostly AI, AI.Reverie, Sky Engine, and Datagen. Leading data engineering company Innodata has also entered the market, today launching an e-commerce portal where customers can purchase on-demand synthetic datasets and immediately train models. “The kind of datasets we’re going after reflect real-world problems that CIOs and customers have come back to us with,” said CPO Rahul Singhal. “We began looking at: How do we create large amounts of training data that machines need?” The Innodata AI Data Marketplace has been developed by in-house experts specifically for building and training AI/ML models. The data packs are off-the-shelf, easily previewable, unbiased, diverse, thorough, and secure, according to Singhal. Innodata is initially releasing 17 data packs in four languages that home in on financial services. These packs are textual, meaning they include invoices, purchase orders, and banking and credit card statements. “One of the big needs in AI is diversity of data,” said Singhal. “We need lots of diverse ways that an invoice can be created; we need visibility. It seems very easy, but it’s actually really complicated.” The marketplace complements Innodata’s open-source repository of more than 4,000 datasets. These help in the prototyping of supervised and unsupervised ML projects. 
The new synthetic datasets take that to the next level based on real-world information. “Machines learn by seeing real-world examples,” Singhal said. For instance, he pointed to the many ways in which a credit card statement could be structured – one could have names listed on the right side; another on the left; one could use a table format; another a column format. To be accurate, machines have to be provided with those variations, in both quality and quantity. Innodata models have been provided with hundreds of templates to allow for such variations and to replicate true scenarios. “Machine learning (ML) depends on a diversity of datasets,” Singhal said. “We create real-world datasets as much as possible and replicate what real-world document types will look like.” Why synthetic data? Among their many advantages, synthetic datasets are free from personal data and therefore not subject to compliance restrictions or other privacy protection laws, Singhal pointed out. This also shields against security breaches. Biases are removed to help automate workflows and enable predictive modeling. Singhal pointed out that “things in the real world are not pristine,” and that people can smudge banking statements or accidentally or purposely obfuscate things. Ultimately, synthetic data will be an important tool in driving the adoption of AI, Singhal said. The eventual intent with Innodata’s marketplace is to expand to third-party AI training datasets, as well as beyond documents to images, video, audio and speech (the latter in response to the growth in conversational AI). These datasets will also span industries – telecom and utilities, transportation and logistics, energy services, pharmaceuticals, hospitality, insurance, retail, healthcare – and will be provided in an expanding number of languages so that data scientists can build from a global perspective. 
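The layout variation Singhal describes can be mimicked with a handful of templates rendering the same record in different structures. The templates and field names below are invented for illustration, not Innodata's actual formats.

```python
# Hypothetical statement layouts: name on the left, name on the right, label/value.
TEMPLATES = [
    "{name:<20} | {date} | {amount:>10}",
    "{date} | {amount:>10} | {name:>20}",
    "NAME: {name}\nDATE: {date}\nAMOUNT: {amount}",
]

def render_statement(record, variant):
    """Render one record in one of several layouts so a model sees
    the same content structured in diverse ways."""
    template = TEMPLATES[variant % len(TEMPLATES)]
    return template.format(**record)

record = {"name": "A. Sample", "date": "2022-01-31", "amount": "142.50"}
variants = [render_statement(record, v) for v in range(len(TEMPLATES))]
```

Multiplying a few hundred such templates across thousands of records is what gives a model the structural diversity the article says real-world documents demand.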
“Our goal is to create a vibrant marketplace where companies can contribute datasets and monetize datasets,” Singhal said. “This has the potential of democratizing data for AI.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,979
2,023
"Data protection regulations aren't enough to safeguard your data | VentureBeat"
"https://venturebeat.com/security/data-protection-regulations-arent-enough-to-safeguard-your-data"
"Guest Data protection regulations aren’t enough to safeguard your data Data protection regulations have undoubtedly had a positive impact on the ways organizations protect sensitive customer data. From the worldwide Payment Card Industry Data Security Standard (PCI-DSS) to the EU’s General Data Protection Regulation (GDPR), such regulations provide an important framework to ensure that organizations increase their data protection practices and strengthen their security posture. But achieving compliance won’t deter cybercriminals and keep data secure. With more than 236 million ransomware attacks taking place in the first half of 2022 — and the number of attacks continuing to rise — data protection is one of the biggest concerns for organizations in 2023. 
So much so that 79% of IT leaders see a worrying ‘Protection Gap’ between tolerable data loss and how IT is protecting their data. This means that complying with regulations is no longer enough to safeguard data. Instead, organizations need to implement a robust, modern data protection strategy. Some see regulations as a tick-box exercise While the global PCI-DSS aims to enhance security for consumers by providing guidelines for any organization that accepts, stores, processes or transmits credit card information, GDPR imposes tough security obligations on organizations that operate within — or conduct business with — EU firms and collect data related to individuals in the EU. However, GDPR will soon be replaced in the UK by the Data Protection and Digital Information Bill, an updated piece of legislation that will impact every organization operating in the UK and handling personal data. These regulations provide a critical framework to protect sensitive customer data and mandate that a certain level of security measures are in place. But the challenge is that some organizations subject to ‘light-touch’ regulations may see them largely as a tick-box exercise and do just the minimum required. Such an approach will short-change them, depriving them of the operational improvements or business wins that true compliance can deliver. Organizational resilience, however, must run deeper than a regulatory framework or ISO standard. Instead, it must embrace every facet of a company from the board down and be supported by policies that permeate the business to create a culture of compliance. Organizations must also bolster their security posture with an additional data protection strategy, because achieving compliance is no longer enough to protect your data from cyberattacks. 
Emerging data protection gap Ransomware is the biggest global cyber threat facing organizations today, and attacks are rising. In fact, 76% of UK and Ireland organizations admitted to falling prey to at least one ransomware attack in the past year. And as a result, 65% now use cloud services as part of their data protection strategy. More concerning, though, is the fact that the majority of organizations disclosed gaps between their data dependency, backup frequency, service level agreements and ability to return to productive business following a cyberattack. This means that many can be left vulnerable when they experience a further attack. Given that we now live in the age of not ‘if’ or ‘when’, but ‘how many times’ an organization can expect to be attacked, this is a precarious position to be in. While data protection budgets have been increasing to improve system availability and enable faster disaster recovery, they’re still not rising fast enough to keep up with accelerating workloads and surging threats. Decelerating an organization’s digital transformation strategy would theoretically give data protection strategies a chance to catch up, but as many firms turn to crisis-driven innovation to survive the economic downturn, applications and workloads are expected to continue to scale. If data protection budgets don’t rise alongside this, the gap will only grow wider. Paring back budgets on the very projects that could accelerate growth, improve agility and mobility, and deliver a competitive edge would be counterproductive. A better way is to evolve the nature of data protection so that it safeguards existing and future ecosystems. Attackers increasingly target backup repositories Organizations are also losing the battle when it comes to defending against ransomware attacks, with hackers increasingly targeting backup repositories and holding that data to ransom. 
While 88% of ransomware attacks attempted to infect backup repositories to disable victims’ abilities to recover without paying the ransom, 75% of those attempts were successful. Furthermore, one in three organizations say that most or all of their backup repositories have been impacted as part of a ransomware attack. However, 22% of organizations think they could have recovered without paying any ransom if they had sufficient data protection in place. So, instead of being reactive, organizations need to be far more proactive when it comes to data protection. Technologies for survival While it’s becoming increasingly common for ‘production’ to outpace ‘protection,’ the growing gap between what organizations expect and what IT is expected to deliver is worrying. Then, if you add in the fact that ransomware is almost a guaranteed threat that every organization must prepare for, we are headed for a data protection emergency. But what’s more concerning is the effectiveness with which attackers proactively destroy their victims’ backup repositories. Currently, 84% of organizations rely on backup logs or media readability to assure recoverability, meaning that only 16% routinely test by restoring and checking functionality. To protect their data, organizations need a secure, immutable backup in place as a last line of defense. And while IT departments are under pressure to cut costs, data protection budgets should never be reduced. By investing wisely and taking a modern approach to data protection, organizations not only gain an advantage over attackers but also increase business resiliency, giving them an edge over competitors. Safeguard your future As the threat landscape accelerates, organizations must adopt a two-pronged approach when it comes to data protection. Complying with regulations and ensuring that they permeate an entire organization is important, but ensuring that sufficient data protection measures are in place is critical. 
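Routine restore testing, rather than trusting backup logs alone, can be as simple as restoring to scratch space and comparing checksums against the source. The helper names and in-memory "backup" below are illustrative stand-ins for a real restore job.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Content fingerprint used to prove a restored copy matches the source."""
    return hashlib.sha256(data).hexdigest()

def verify_restore(original: bytes, restore_backup) -> bool:
    """Run the restore and verify the result, instead of assuming
    recoverability from backup logs or media readability alone."""
    restored = restore_backup()
    return checksum(restored) == checksum(original)

# Stand-in for a real restore job: here the "backup" is just an in-memory copy.
source = b"critical business records"
backup_copy = bytes(source)
ok = verify_restore(source, lambda: backup_copy)
```

The point of the sketch is the discipline, not the code: a backup that has never been restored and verified is, for recovery purposes, unproven.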
IT and data protection teams, therefore, have a big task ahead of them to ensure that they close the gap between technology and how well it is backed up and protected. After all, safeguarding your sensitive data plays a significant part in safeguarding your future. Dan Middleton is VP for UK and Ireland at Veeam. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
13,980
2,021
"The rise of Kubernetes and its impact on enterprise databases | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-rise-of-kubernetes-and-its-impact-on-enterprise-databases"
"Community The rise of Kubernetes and its impact on enterprise databases This article was contributed by Denis Souza Rosa, Developer Advocacy Manager at Couchbase. In 1777, British mathematician Jesse Ramsden published a paper describing the design of a screw-cutting lathe. This machine represented a big technology breakthrough, as producing screws at scale enabled heavy and complex machinery to be produced faster during the industrial revolution. Today, Kubernetes and Operators are the screw-cutting lathes for stateful applications. With this combination, any software vendor is capable of providing fully managed services at a reasonable cost. The most famous examples of stateful applications in tech are databases, which developers expect to work out of the box but which historically have been exactly the opposite. 
The task of maintaining them falls on the shoulders of DevOps engineers in small to medium companies, while in big enterprises, databases are so critical that it is common to see a specialized department for data management. There is not much room for failure in this area; data is commonly one of the most valuable assets of a company. Because of that, developers and database administrators (DBAs) have always been conservative while picking the next data storage for a project, even if it means picking a suboptimal one. The truth is, they are not wrong. Storing and retrieving data from different sources doesn’t pose a challenge for most developers, but the learning curve to manage and size it properly can be steep. The end result is that only big enterprises have enough resources to train their teams to produce scalable and cost-effective software, while other companies often prefer to stay on the “safe side” by using relational databases in suboptimal scenarios. This “safe side” behavior might lead in the midterm to performance and scalability issues that, following the trend, would probably be solved at the application level with things like microservices, which add a whole new layer of complexity, when simply staying within the same architecture and switching to a more suitable data store would address the same problem. Databases-as-a-service anywhere This long introduction aims to state one simple problem: Developers understand that specialized databases could be a crucial success factor for their applications, but the upfront investment sometimes is higher than what they can afford. AWS was the first big company to realize that when it launched DynamoDB in 2012. 
Back then, launching a database-as-a-service (DBaaS) was something that only big players could do, as frequent tasks like version upgrades, recovery of faulty nodes, data replication, or even a basic thing like provisioning a simple database required some sort of infrastructure automation, which had to be written from scratch. In the majority of cases, the automation code was tightly coupled with the infrastructure that it was running on, which would also push companies to create their own private clouds to avoid anchoring their strategy and costs to third-party providers. Borg was one of these in-house solutions developed by Google, and it would later become the seed of what Kubernetes is today. One of the success factors of Kubernetes was its extensibility. It allows the deployment of applications called “Operators” that react to events triggered in the cluster. This feature enabled enterprise database vendors to build specialized apps that can monitor their databases and act accordingly in case of a state change, which can provide a DBaaS-like experience in virtually any Kubernetes cluster. Couchbase was the first company to release an official operator back in 2017, which made some noise in the Kubernetes/NoSQL world and created a wave of other companies trying to do something similar. Community-driven operators have also been quite popular; databases like PostgreSQL and MySQL have various operators available, including a few actively maintained by large organizations. Developer groups around this topic are starting to pop up everywhere: the DOK Community (Data on Kubernetes) is a clear example of that. Despite the fast community adoption and the stellar progress made in the last four years, including all major cloud providers launching their fully managed Kubernetes services, the main challenge for companies to adopt this kind of technology is the steep learning curve of Kubernetes itself. 
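At its core, an operator is a reconciliation loop: it watches declared (desired) state and acts whenever observed state drifts, for example when a database node is lost. A framework-free sketch of that idea, with made-up state fields rather than any real operator's API:

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Compare desired vs. observed cluster state and emit corrective actions,
    the way a database operator reacts to events such as a lost node
    or a pending version upgrade."""
    actions = []
    if observed["replicas"] < desired["replicas"]:
        actions.append(("add_node", desired["replicas"] - observed["replicas"]))
    if observed["version"] != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions

desired = {"replicas": 3, "version": "7.1"}
observed = {"replicas": 2, "version": "7.0"}  # one node lost, old version running
plan = reconcile(desired, observed)
```

A real operator registers this kind of logic against Kubernetes custom resources and runs it continuously; the loop above is only the shape of the pattern.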
The future of enterprise databases-as-a-service Providing fully managed services became so accessible that it even became a business model for some cloud providers. Most of these technologies were open source, so all they had to do was add a user-friendly façade on top of them. This strategy had a heavy impact on the revenue of some vendors, which forced them to change their licenses. MongoDB was the first one, moving to the SSPL in 2018, followed by Redis (RSAL) and Elasticsearch (ELv2). Other databases, like MariaDB, decided to follow a different path, changing their licenses to the BSL, which is usually converted to another license (often Apache 2) after two to four years on average. There is no right or wrong here, but open source has always been the foundation of software development, and a license that protects the company’s intellectual property for a given time and then releases it to the public while the code is still relevant seems to be a reasonable approach to me. The rise of DBaaS, Kubernetes, and Operators should help the adoption of NoSQL skyrocket in the following years, as they can deliver better performance, lower cost, and higher productivity, but this time without the upfront cost of learning how to manage it. Because of that, the database market currently controlled by RDBMS should become much more diverse. All this activity will benefit the whole developer community and how we build effective software. Denis Souza Rosa is a Developer Advocacy Manager at Couchbase. 
"
13,981
2,022
"Deci deep-learning platform aims to ease AI application development | VentureBeat"
"https://venturebeat.com/business/deci-deep-learning-platform-aims-to-ease-ai-application-development"
"Deci deep-learning platform aims to ease AI application development Deci, a deep-learning software maker that uses AI templates designed to create AI-based applications, today launched v2.0 of its development platform, which it claims speeds the way for developers to build, optimize and deploy computer vision models. The terms “speed” and “AI application development” are rarely used in the same sentence, but by using this platform, resulting AI models can be more swiftly prepared to run on any hardware and environment, including cloud, edge and mobile – with accuracy and high runtime performance, Deci CEO and co-founder Yonatan Geifman said in a media advisory. This is because much of the grunt work has been eliminated by Deci’s series of DeciNet templates made available in the v2.0 platform. 
Using Deci, the company says, AI developers can achieve improved inference performance and efficiency to enable effective deployments on resource-constrained edge devices, maximize hardware use and reduce training and inference costs, Geifman said. The entire development cycle is shortened – saving upfront costs – and the uncertainty of how the model will deploy on the inference hardware is eliminated, he said. Deci’s platform, powered by its proprietary neural architecture search (NAS) engine called AutoNAC (Automated Neural Architecture Construction), is designed to enable AI developers to automatically build efficient computer vision models that deliver tested accuracy for required inference hardware, speed and size targets. DeciNet models generated by Deci outperform other known state-of-the-art architectures by a factor of three to 10, Geifman said. Addressing AI dev struggles AI developers generally have struggled to develop production-ready deep-learning models for deployment in a reasonable amount of time. These challenges can largely be attributed to the AI efficiency gap facing the industry, in which algorithms are growing more powerful and complex, but available compute power isn’t keeping pace with demand. This gap also creates financial barriers by making deep-learning development and processing more cumbersome and expensive, Geifman said. While NAS has been presented as a potential solution to automate the design of superior artificial neural networks that can outperform manually designed architectures, the resource requirements to operate such technology are excessive for most companies. 
So far, NAS has only been successfully implemented by tech giants with large AI teams, such as Google, Facebook and Microsoft, and in the academic community, indicating its impracticality for the vast majority of developers. Developers can start their projects with the DeciNet pretrained and optimized models generated by the AutoNAC engine for a wide range of hardware and computer vision tasks, or use the AutoNAC engine to generate more custom architectures that are tailored for their specific use cases, Geifman said. In addition, the platform supports teams with the wide range of tools required to develop deep learning-based applications. These include a hardware-aware model zoo to easily select and benchmark models and hardware, and SuperGradients, an open-source PyTorch-based training library housed on GitHub with proven recipes for faster training, automated runtime optimizations, model packaging and more, Geifman said. With Deci’s v2.0 platform, AI developers can accomplish the following: Benchmark models and inference hardware: With Deci’s hardware-aware model zoo, developers can measure the inference time of pretrained and optimized models on various hardware, including edge devices, via Deci’s SaaS platform. Generate tailored SOTA CNN architectures: Automatically find accurate and efficient architectures tailored for the application, hardware and performance targets with Deci’s AutoNAC engine. Simplify training with SuperGradients: Use proven hyperparameter recipes with Deci’s PyTorch-based open-source training library, SuperGradients. Automated runtime optimization: Automatically compile and quantize models and evaluate different production settings. Deploy with a few lines of code: Developers can deploy deep-learning workloads in any environment with Deci’s Python-based inference engine. 
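The benchmarking capability listed above reduces, at its simplest, to timing repeated forward passes on the target hardware and reporting a robust statistic. This generic harness is a stand-in for Deci's actual tooling, and the toy "model" is just a placeholder function.

```python
import statistics
import time

def benchmark_latency_ms(model, inputs, runs=100, warmup=10):
    """Median wall-clock latency of model(inputs) in milliseconds."""
    for _ in range(warmup):          # warm caches before measuring
        model(inputs)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        model(inputs)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

def toy_model(xs):
    """Placeholder for a real network's forward pass."""
    return [x * 2 for x in xs]

latency = benchmark_latency_ms(toy_model, list(range(1000)))
```

Using the median rather than the mean keeps one slow, outlier run (a cache miss, a scheduler hiccup) from skewing the reported figure.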
Deci’s platform includes these three tiers: Free Community Tier: For data scientists and ML engineers looking to find the best models, simplify hardware evaluation and boost runtime performance. Professional Tier: For deep-learning teams looking to quickly achieve production-grade inference performance and shorten development time. Enterprise Tier: For deep-learning experts looking to meet specific performance goals for highly customized use cases. Deci competes in the market against Datagen, Reverie, Simerse, Zumo Labs, CVEDIA, Masterful AI, Mostly AI, OneView, Synthesis AI and Sky Engine. "
13,982
2,022
"AI and low/no code: What they can and can’t do together | VentureBeat"
"https://venturebeat.com/dev/ai-and-low-no-code-what-they-can-and-cant-do-together"
"AI and low/no code: What they can and can’t do together Artificial intelligence (AI) is in the fast lane and driving toward mainstream enterprise acceptance, but, at the same time, another technology is making its presence known: low-code and no-code programming. While these two initiatives inhabit different spheres within the data stack, they nevertheless offer some intriguing possibilities to work in tandem to vastly simplify and streamline data processes and product development. Low-code and no-code are intended to make it simpler to create new applications and services, so much so that even nonprogrammers – i.e., knowledge workers who actually use these apps – can create the tools they need to complete their own tasks. They work primarily by creating modular, interoperable functions that can be mixed and matched to suit a wide variety of needs. 
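The "mix and match" idea can be sketched as a small palette of interoperable steps composed into a pipeline without writing new logic. The block names and steps below are invented for illustration; no real low-code product's API is implied.

```python
# A hypothetical palette of reusable building blocks a nonprogrammer might drag in.
BLOCKS = {
    "trim": lambda s: s.strip(),
    "lower": lambda s: s.lower(),
    "mask_digits": lambda s: "".join("#" if c.isdigit() else c for c in s),
}

def build_pipeline(block_names):
    """Compose selected blocks into one callable, mirroring drag-and-drop assembly."""
    steps = [BLOCKS[name] for name in block_names]
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

clean = build_pipeline(["trim", "lower", "mask_digits"])
result = clean("  Card 4242  ")
```

Each block knows nothing about the others; the platform's job is only to wire their inputs and outputs together, which is exactly what makes such modules easy to recombine and hard to customize.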
If this technology can be combined with AI to help guide development efforts, there’s no telling how productive the enterprise workforce can become in a few short years. Intelligent programming Venture capital is already starting to flow in this direction. A startup called Sway AI recently launched a drag-and-drop platform that uses open-source AI models to enable low-code and no-code development for novice, intermediate and expert users. The company claims this will allow organizations to put new tools, including intelligent ones, into production quicker, while at the same time fostering greater collaboration among users to expand and integrate these emerging data capabilities in ways that are both efficient and highly productive. The company has already tailored its generic platform for specialized use cases in healthcare, supply chain management and other sectors. AI’s contribution to this process is basically the same as in other areas, says Gartner’s Jason Wong – that is, to take on rote, repetitive tasks, which in development processes includes things like performance testing, QA and data analysis. Wong noted that while AI’s use in no-code and low-code development is still in its early stage, big hitters like Microsoft are keenly interested in applying it to areas like platform analysis, data anonymization and UI development, which should greatly alleviate the current skills shortage that is preventing many initiatives from achieving production-ready status. Before we start dreaming about an optimized, AI-empowered development chain, however, we’ll need to address a few practical concerns, according to developer Anouk Dutrée. For one thing, abstracting code into composable modules creates a lot of overhead, and this introduces latency to the process. 
AI is gravitating increasingly toward mobile and web applications, where even delays of 100 ms can drive users away. For back-office apps that tend to quietly churn away for hours this shouldn’t be much of an issue, but then, this isn’t likely to be a ripe area for low- or no-code development either. AI constrained Additionally, most low-code platforms are not very flexible, given that they work with largely pre-defined modules. AI use cases, however, are usually highly specific and dependent on the data that is available and how it is stored, conditioned and processed. So, in all likelihood, you’ll need customized code to make an AI model function properly with other elements in the low/no-code template, and this could end up costing more than the platform itself. This same dichotomy impacts functions like training and maintenance as well, where AI’s flexibility runs into low/no-code’s relative rigidity. Adding a dose of machine learning to low-code and no-code platforms could help loosen them up, however, and add a much-needed dose of ethical behavior as well. Persistent Systems’ Dattaraj Rao recently highlighted how ML can allow users to run pre-canned patterns for processes like feature engineering, data cleansing, model development and statistical comparison, all of which should help create models that are transparent, explainable and predictable. It’s probably an overstatement to say that AI and no/low-code are like chocolate and peanut butter, but there are solid reasons to expect that they can enhance each other’s strengths and diminish their weaknesses in a number of key applications. As the enterprise becomes increasingly dependent on the development of new products and services, both technologies can remove the many roadblocks that currently stifle this process – and this will likely remain the case regardless of whether they are working together or independently. 
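The modular, mix-and-match approach described above, including the kind of pre-canned data-cleansing pattern Rao describes, can be sketched in a few lines of plain Python. This is a toy illustration with invented block names and a made-up record format, not any vendor's actual API:

```python
# Toy sketch of low-code-style composition: small, interoperable blocks
# that a drag-and-drop tool could let users wire together. The function
# names and record format are invented for illustration.

def drop_missing(rows):
    """Data-cleansing block: discard records with empty or missing fields."""
    return [r for r in rows if all(v not in (None, "") for v in r.values())]

def normalize_names(rows):
    """Feature-engineering block: standardize a text column."""
    return [{**r, "name": r["name"].strip().title()} for r in rows]

def compose(*blocks):
    """Wire blocks into a pipeline, as a visual builder might behind the scenes."""
    def pipeline(rows):
        for block in blocks:
            rows = block(rows)
        return rows
    return pipeline

clean = compose(drop_missing, normalize_names)
records = [{"name": "  ada lovelace "}, {"name": ""}, {"name": "alan turing"}]
print(clean(records))  # [{'name': 'Ada Lovelace'}, {'name': 'Alan Turing'}]
```

The article's claim is that the platform, optionally assisted by ML, supplies and suggests blocks like these so users only have to configure and connect them.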
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,983
2,022
"Addressing the cybersecurity talent gap: New programs from (ISC)2 | VentureBeat"
"https://venturebeat.com/security/addressing-the-cybersecurity-talent-gap-new-programs-from-isc2"
"Addressing the cybersecurity talent gap: New programs from (ISC)2 Cyberattacks, breaches, hacks and ransomware are on the rise — that should come as no news. And, according to many experts, one of the significant reasons behind this is a long-lamented cybersecurity talent shortage. To help address this workforce gap — and to also combat burnout of existing talent and enable businesses to stay ahead of hackers — the global cybersecurity nonprofit, (ISC)2, this week announced three significant new initiatives. “The cybersecurity profession is at a critical inflection point in its evolution,” said Clar Rosso, CEO of (ISC)2. 
“The field is poised for rapid growth and expansion, and it will take people from all backgrounds all across the world to help build a safe and secure cyber world.” Supporting candidate growth According to the most recent Cybersecurity Workforce Study from (ISC)2, the global cybersecurity workforce needs to grow 65% to effectively defend organizations’ critical assets. To help combat a workforce gap of more than 2.7 million people, the nonprofit’s three new initiatives include: (ISC)2 Certified in Cybersecurity: This entry-level certification exam evaluates candidates in the areas of security principles; business continuity, disaster recovery and incident response concepts; access controls concepts; network security; and security operations. More than 1,500 pilot participants who passed the exam are on their way to full (ISC)2 certification and membership, said Rosso. As members, they gain access to continuing education, thought leadership, peer support, industry events and other professional development opportunities — ultimately allowing them to expand their experience and work toward more advanced and specialized certifications. (ISC)2 One Million Certified in Cybersecurity is now open for enrollment. This follows the nonprofit’s recent announcement at the White House pledging to provide free entry-level cybersecurity certification exams and self-paced courses to one million new cybersecurity professionals. (ISC)2 Candidate Program: Individuals considering a career in cybersecurity will have free access to exclusive resources and benefits and discounts on all certification education courses. Barriers to entry, identifying candidates (ISC)2 has been developing these programs for almost a year, said Rosso. 
They supplement its well-known Certified Information Systems Security Professional (CISSP) certification and work through its charitable foundation, the Center for Cyber Safety and Education. The nonprofit has 168,000 members — professionals from all areas of the cybersecurity field. Rosso pointed out that one of the most persistent cybersecurity staffing challenges is identifying entry-level and junior-level candidates with the right skills and aptitude to learn and grow on the job. “At the same time, early career hopefuls are unable to demonstrate their understanding of cybersecurity concepts and gain the attention of hiring managers,” said Rosso. In a 2021 survey from Champlain College Online, for instance, cybersecurity professionals identified their top barriers to entry as high expectations for prior training or work experience and lack of diversity and inclusion. And (ISC)2 research suggests that organizations that focus on recruiting and developing entry-level cybersecurity staff — including those with little or no technical experience — can help accelerate the “invaluable hands-on training” that the next generation of professionals needs, said Rosso. Ultimately, “to build resilient teams at all levels, we believe creating more opportunities for entry and junior-level practitioners is one solution we can employ to help address the workforce gap,” she said. Increased breaches — yet lack of action The new initiatives come amidst, and are largely prompted by, growing cyberattacks — and increasingly sophisticated and costly ones at that. By one estimate, the average cost of a data breach is up to $4.35 million this year. “Cyber breaches are escalating at an alarming trajectory for all sizes of organizations and governments across the globe,” said Rosso. 
She pointed out that many organizations fall victim to cyberattacks due to vulnerabilities and inadequacies in their defenses — issues that professionals say they could more effectively address if they had enough people. “It really is that simple,” she said. “We need more people in the roles of defending organizations.” So, why aren’t organizations doing more? “While the most apparent factor is simply demand outstripping supply of qualified individuals, there are more nuanced reasons for the gap,” said Rosso. Notably, organizations are failing to address cybersecurity needs as a “strategic imperative” — many, at their own peril, still consider cybersecurity to be a back-office, optional expense. When money for staffing is limited, organizations tend to look for the most highly qualified individuals with years of hands-on experience. But these are in short supply. The majority of work to be done is well-suited for entry or junior-level staff, said Rosso, but organizations are sometimes unwilling to invest the necessary six to eight months of on-the-job training that is required to bring newcomers up to speed. “Decades of cybersecurity being a small but mighty club of individuals with very similar education and work experience has led to a buildup of unconscious bias that impedes hiring or advancing diverse individuals,” said Rosso. Organizations must step up Still, these initiatives, while significant, are just one way to combat the growing problem. Organizations must invest in people, hire entry- and junior-level staff and upskill them, said Rosso. They have to “raise the cyber literacy of all,” she said, while encouraging a new generation of individuals from all backgrounds to consider careers in the field. (ISC)2 is taking a broad perspective on the issue: Focusing on increasing diversity in the profession and encouraging more women and minorities to consider cybersecurity as a career — and one that can be very rewarding, said Rosso. 
In fact, half of the nonprofit’s one million pledge will be through partner organizations that actively serve under-represented groups. “We encourage employers and governments to prioritize cybersecurity as a strategic imperative,” said Rosso. “We encourage shattering the notion of who would be good at cyber, and instead start with looking at an individual’s non-technical skills and motivations, and then train for the technical.” "
13,984
2,022
"Starting your development journey into the world of Web3 | VentureBeat"
"https://venturebeat.com/datadecisionmakers/starting-your-development-journey-into-the-world-of-web3"
"Starting your development journey into the world of Web3 What’s the hottest job on the market? Software engineers, programmers, and designers have been in high demand over the last decade. However, with the rise of blockchain and cryptocurrency, Web3 developers have quickly risen on the list. Web3 has seen a massive influx of interest over the past two years. The startup scene is on fire as new projects sprout up and innovation flourishes. Even some of the largest companies in the world such as Nike and Adidas have thrown their hats in the ring. All of this has made Web3 developers a hot commodity. But despite the massive demand, Web3 developers are in short supply. The concept of Web3 is still a relatively new idea and has only existed since 2014. There aren’t too many college courses that incorporate blockchain, let alone material on a concept that’s still taking shape. 
This was just one of the reasons that led us to create a full stack geared towards developing in Web3. It might not be a surprise, then, that Web3 developers can command a pretty hefty price tag. In fact, some put these salaries between $300,000 and $750,000. This may just be one of the big reasons why developers at Meta (formerly Facebook) and Google are making the switch to Web3. But Silicon Valley isn’t the only space that’s seeing a growing migration of developers. In fact, Web3 is attracting an entirely new wave of talent. Take Redfoo for example. The Billboard artist ditched his music career to pursue his passion for coding. The self-taught celebrity has since learned Solidity and Rust, and now operates as a partner with Radix. So why can developers demand such high salaries, and why are companies paying it? The answer lies within the potential of Web3. What is Web3? Many consider Web3 to be the next step in the evolution of the internet. Web1 is classified as the initial phase of the internet. Websites were just simple pages with text and the occasional picture. These sites didn’t offer much beyond the information displayed on them. Web2 came about as websites became more engaging and provided utility. At this stage, the internet is known for its most popular uses: social media, eCommerce, and entertainment. But Web2 also saw the internet become highly contained and controlled by large corporations. Internet users now experience the web through the products and services of companies such as Google, Meta, and Amazon. Web3 looks to separate itself from the control of these large organizations by utilizing the blockchain as its basis. Due to its decentralized nature, blockchain helps to avoid these types of gatekeepers while also providing more functionality and utility through things such as cryptocurrency. 
So what does it take to become a Web3 developer? Find the right programming language One of the first steps in Web3 development is becoming accustomed to the many programming languages available. Solidity is one of the most popular languages and is employed by Ethereum and numerous other blockchains. Other popular options include JavaScript, Python and Rust. Depending on the blockchain being built on, one programming language will make more sense than another. For example, Rust will help write smart contracts on Solana, while Plutus is used on Cardano. Choosing the right environment Because Web3 relies on distributed ledger technology (DLT), it’s beneficial to understand the benefits of building in that environment. DLT is known to create an environment that fosters transparency and traceability, while also increasing the speed of transactions (or in this case, Web searches) and keeping costs low. To better understand the nitty-gritty of DLT, many Web3 developers — who often assume that a blockchain is the only, and best, way to go — recommend reading the Ethereum and Bitcoin whitepapers. They explain the ins and outs of each respective platform and their various components. Additionally, every DLT is different and has its own rules and requirements. These differences can range from the primary program language used to specific standards developers must adhere to. Initially sticking to a single DLT environment, blockchain or otherwise, can allow developers to gain a more targeted understanding of the underlying technology. This can prevent spreading oneself too thin by trying to learn the many different nuances. Deciding on a development stack A development stack is an integral resource for any software developer, and it’s no different for Web3. A development stack is a plethora of tools that developers use to bring their projects to life. A Web3 stack is typically composed of a Web3 library, smart contracts, nodes, and wallets. 
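As a concrete taste of the library layer of such a stack, here is a minimal, self-contained Python sketch of two chores that sit beneath wallets and nodes: validating an Ethereum-style address and converting wei to ether. The helper names are invented for illustration; production libraries such as web3.py add checksummed addresses, transaction signing and node RPC transport on top of basics like these.

```python
# Illustrative plumbing from the "Web3 library" layer of the stack.
# Helper names are hypothetical; real libraries do far more.

def is_valid_address(addr: str) -> bool:
    """An Ethereum-style address is '0x' followed by 40 hex characters."""
    if not addr.startswith("0x") or len(addr) != 42:
        return False
    try:
        int(addr[2:], 16)  # every remaining character must be hex
        return True
    except ValueError:
        return False

def wei_to_ether(wei: int) -> float:
    """Node RPC calls report balances in wei; 1 ether = 10**18 wei."""
    return wei / 10**18

print(is_valid_address("0x" + "ab" * 20))       # True
print(wei_to_ether(1_500_000_000_000_000_000))  # 1.5
```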
Additionally, developers can utilize a purpose-built development stack like Radix to avoid having to find and curate a stack themselves. Deciding if you are going solo or joining someone else Learning the ins and outs on your own is challenging in and of itself — but creating and implementing what you’ve learned is an entirely new ordeal. The DLT environment can be unforgiving for new and solo programmers. Not only does it cost tokens to upload code, but it can also be difficult (if not impossible) to edit it once it has been deployed. Fortunately, Web3 projects and startups are constantly looking for developers. Oftentimes, these companies are willing to take on and train new developers since the demand is so high. This can be a great way to gain experience and learn on the job. These opportunities can be found in a wide variety of places, including Twitter, Discord, and Web3 job boards. Projects will often post their openings on their social media accounts if they are actively looking. Even if a project isn’t seeking out candidates, there may still be an opportunity to join the team by engaging them on their Discord server. As with any creative project, when combining your efforts with others, many compromises are made. Some of your ideas may not be realized. If creative freedom and independence are important to you, then creating your own project will be a safer choice. A recent hackathon during the FooHack event with Redfoo demonstrates just how great collaboration can be. The team at the hackathon was able to put together a full program in a fraction of the time it would have taken going solo without guidance. Web3 development is the place to be While it may still be a relatively new space to be in, Web3 is the future. There are now more companies looking to hire developers than ever before. 
Having the resources and foundational knowledge is key to finding success in this burgeoning industry — regardless of whether you’re marketing yourself to employers or creating an independent Web3 project. Piers Ridyard is CEO at RDX Works. "
13,985
2,022
"Software may be eating the world, but low code could eat software | VentureBeat"
"https://venturebeat.com/programming-development/software-may-be-eating-the-world-but-low-code-could-eat-software"
"Software may be eating the world, but low code could eat software Marc Andreessen famously claimed in 2011 that “software is eating the world” in an op-ed article in the Wall Street Journal. His point was that software was the new engine of value creation. “My own theory is that we are in the middle of a dramatic and broad technological and economic shift, in which software companies are poised to take over large swathes of the economy,” Andreessen wrote. The article details a variety of examples in which digital companies, such as Netflix, Amazon, Apple and Spotify, have achieved a dominant position powered by software and digital products. The article defines software rather loosely, asserting that companies that use software to trade in digital assets and dramatically expand the use of data and automation are the new winners. 
Andreessen was right. Software-powered companies have been, and still are, eating the world. His analysis, though, focuses on large companies winning considerable victories with industrial-scale software. The software he points to that ate the world was a product of elite engineering teams and layers upon layers of complex platforms. In my view, we are entering a new era in which software will continue to eat the world, but in a far broader and more distributed way. It won’t just be the most famous or largest companies that achieve digital victories. We will find that in almost every business, the use of software will boom to increase efficiency, bring new awareness and expand automation. This will never happen if this software can only be created by elite engineering teams. The way that software will eat the rest of the world will be through low-code and no-code methods, but that’s not all. Much of the software that Andreessen points to as having eaten the world will itself be eaten by low-code methods. In short, if software is eating the world, then low code is eating software. Let’s review what exactly I mean and explain why this is happening. Low-code basics Low code makes the process of creating applications much easier. It is important to remember that modern low-code systems are just this era’s model for the intelligent application of core concepts of computer science. If you’ve been around the enterprise software and computer science world for a while, you know that the idea of simplified coding that takes over the world of software development is not new. Domain-specific languages are one form of this idea. SAP created ABAP and Salesforce invented Apex as domain-specific languages to make it easier to code their applications and separate them from underlying implementation details. Going way back, fourth-generation languages are another. 
Going even farther, we can point to IBM’s RPG as a form of low code. Low code, in simple terms, is the capability to build and automate applications of a certain type rapidly. No code is the ability to customize an application purely through configuration settings. The “code” in the term low code is the key to understanding its power. Unlike a traditional high-code language like Java or Python or C, in which you can code almost anything you want, in a low-code world, the code exists to provide just enough ability to adapt an application of a certain type. The “low” in the term suggests that the amount of coding to adapt an application should be small compared to the amount of code needed to implement the application in a high-code manner. The “low” also implies simplicity: low-code methods are easier to use. The “of a certain type” part of the definition is also important. Low-code development systems aren’t built to do just anything. Low-code development environments focus on particular types of applications and provide building blocks that do much of the work to implement that type of application. Once low-code applications are created, they can be changed and adapted to ever-evolving requirements faster than with high-code methods. Low-code applications also require less maintenance, meaning lower technical debt. Modern low-code applications created using platforms from companies like Appian are proven to be enterprise-grade in terms of scalability, reliability and performance. There is a tradeoff. Low-code applications are focused on creating specific types of applications. When a low-code platform matches your needs, then a much larger number of people can participate in creating, maintaining, and evolving applications. This is where the big win comes from, a topic I will return to in a minute. 
Expansion of services creates leverage Low-code development platforms are more relevant and powerful than ever because we live in a world that is full of abstractions and services. Low code allows us to access services and create new applications with much less effort. The most advanced low-code development platforms have a full stack of capabilities required for creating enterprise applications. For example, most low-code development platforms have a simplified way to define a user experience. This abstract definition is then rendered into user interfaces that are delivered on numerous devices. On a modern low-code platform, a developer can define one user experience (UX) using the abstraction and then find that the application will work on the web, on desktops, on tablets, and on mobile devices without any additional effort. Low-code applications have to live within the limits of the abstractions the platforms provide. That’s the cost, but as the platforms have matured, that cost has become lower and lower. The collection of abstractions for UX, data, and process automation is extended by various types of application components for case management, legacy modernization, collaboration, and so on. Low code also excels at orchestrating services from many systems to add higher levels of automation and process control. Ray Kurzweil points out in his explanation of the exponential growth of technology how acceleration takes place at faster and faster rates as more and more powerful services are orchestrated. (See this article on Technology Leverage for more detail.) Now that software-as-a-service (SaaS) tools have become widespread and API-enabled, a rich landscape of services exists. Even small or medium-sized companies have lots of SaaS applications that act as systems of record and perform essential transactional functions such as accepting or making payments. Low code unlocks the power of all of these services with much less effort than high-code approaches. 
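The "define the UX once, render it everywhere" idea above can be shown with a toy example. The tuple format and target names here are invented; real platforms use much richer declarative component models, but the division of labor is the same: the developer writes the abstract definition, and the platform owns the per-device rendering.

```python
# One abstract UX definition (a list of (kind, value) pairs, invented
# format) rendered by the "platform" to different device targets.

form = [("label", "Email"), ("input", "email"), ("button", "Submit")]

def render(definition, target):
    if target == "web":
        tags = {"label": "<label>{}</label>",
                "input": '<input name="{}">',
                "button": "<button>{}</button>"}
        return "".join(tags[kind].format(value) for kind, value in definition)
    if target == "mobile":  # a native toolkit would get widgets, not strings
        return [f"{kind.upper()}:{value}" for kind, value in definition]
    raise ValueError(f"unknown target {target!r}")

print(render(form, "web"))     # one definition, HTML output
print(render(form, "mobile"))  # same definition, a different device target
```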
The expanded services landscape also makes a much wider set of data available. Low-code applications can access and distill this data to create much more detailed models of business activity, which can be the foundation of better analytics and increased automation. For certain functions, low-code methods are also being used to create services that can be used by the platform or by external consumers. High-code methods can always be used to create new services that can be plugged into the low-code environment. Low-code development platforms are constantly evolving. Process mining, conversational artificial intelligence (AI), AI and machine learning (ML) modeling, and new forms of data storage such as graph and document databases are showing up in low-code platforms. As time goes on, low-code development platforms will become more and more powerful. The superpower of low code: Increased productivity The fact that coding is simpler has several profound effects on productivity. Specifically, low-code development platforms: Expand the number of people who can code. This is a claim that must be made carefully. Low code doesn’t mean that everyone can now create advanced software. However, it does mean that people who could never create high-code apps can create simple low-code apps, and these can be hugely helpful. Improve productivity of advanced developers. Developers using low-code methods can get more done than with high-code methods for numerous types of applications. Reduce maintenance burden. Low-code software generally is easier to maintain over time than high-code software because much of the complexity is managed by the platform. The simplified applications dramatically reduce technical debt. Improve user experience and satisfaction. Standards and design principles enforced by low-code platforms avoid many errors and provide a pleasing experience, as well as make applications automatically work on a phone, a tablet, a laptop, or a desktop without modification. 
Better TCO and ROI. All of these improvements to productivity lead to better TCO and ROI for low-code applications. Now that low-code methods have become more powerful, low-code apps are increasingly being managed not like one-off spreadsheets but like the key software assets they are. Like other software assets, they are being created with test suites, source code management techniques, and advanced operational logging and monitoring. In other words, low-code apps have become real software, not just departmental toys. As this maturity is recognized, more and more developers and enterprises will consider low-code platforms for their applications. Low code will eat high code The economics of low-code development platforms will be one of the main engines driving their adoption. Low-code development will eat software because it will be the cost-effective and efficient way to create the applications the world needs. People with a need for an application will face the following choices: build with high-code methods; buy a product if one exists; build on a low-code development platform; or buy a product built on a low-code development platform. The difficulty of high code and the lack of fit for many products will drive people to low-code methods. Many of the low-code platforms now come with a huge number of components and templates to accelerate development. As low-code development platforms have matured and the number of services has grown, low code fits many more problems. Low code expands the pool of people that can solve them. The number of new components and techniques available through low-code development platforms, such as process mining, conversational AI, and others mentioned above, continues to grow. Low-code platforms will become a safe and low-cost way to experiment with new technologies. Using low code is a tradeoff. 
Developers accept the limits of the environment, hoping that the simplified coding methods still allow them to create the application they require. Low-code systems can do much more than they could in the past. Low code will eat software because the trade-off becomes less and less painful as low-code systems become more and more powerful. Tarun Khatri is the cofounder, executive director and head of the Appian practice at Xebia. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2023
"Next wave of DeFi will be driven by decentralized identity solutions | VentureBeat"
"https://venturebeat.com/data-infrastructure/next-wave-of-defi-will-be-driven-by-decentralized-identity-solutions"
"Today, decentralized finance (DeFi) is still something of a “wild west.” With many different players, each with their own claims and ambitions, there is no obvious law of the land. Unfortunately, this has resulted in some users being dealt a bad hand after deciding to experiment with the ecosystem. Stories of scams and rug pulls are still common, and algorithmic protocols coming undone by negative market conditions disturb users’ trust. DeFi can seem unsafe and confusing for many users, even when projects and the teams behind them have the best of intentions. It doesn’t help that regulators are, in many jurisdictions, dragging their feet on clear rules or enforcement for the sector.
Although it took years for the first signs of legislation to emerge, the growth of DeFi has finally drawn the attention of lawmakers across the world. However, the jury is still out on how strict or flexible the laws will be. The combination of risky services and an unregulated environment has understandably kept many suspicious of the crypto community. Both retail investors and institutions are wary of DeFi and don’t fully understand it. The question of the hour is, when and how will we get to a point where DeFi can be embraced by people other than degens? One step that could be massive for appeasing regulators and would-be investors is the introduction of identity solutions. Various actors can be tracked within the DeFi space using these solutions. Crypto purists and privacy advocates may frown at the idea, but a solution that addresses regulators’ requirements, alleviates investors’ concerns and doesn’t infringe on individual rights is closer than most think. Enter decentralized IDs The very technology that DeFi is built upon also offers the solution to the current roadblock. That solution comes in the form of decentralized identities, or DIDs. By leveraging blockchains, smart contracts and non-fungible tokens (NFTs), DIDs can offer accurate information to lawmakers while preserving users’ sovereignty and privacy. This is possible owing to a few different aspects of the crypto infrastructure, with NFTs delivering particular value. An NFT acts as an asset that can have any type of data encoded into it and is verifiably unique from all other assets, complete with its own history. Because of the underlying decentralized protocols, nobody can fake or alter an NFT. For a true digital identity, more is understandably needed. There also needs to be accountability and certainty surrounding the ownership of DIDs.
To this end, verification of one’s physical identity can be linked to one’s DID. There are multiple ways this could be done, including biometric data, explicitly verifiable real-world documents, or similar confirmations. By linking all this information together in an NFT, an unfalsifiable profile can be created. Power to the user Privacy advocates may shun the idea as overly strict and encompassing. After all, an immutable record of a person’s data being recorded on a public blockchain forever doesn’t sound all that private. This is where the next benefit of DIDs comes into play, in conjunction with zero-knowledge proof (ZKP) technology. Information can be verified once by an independent party and then used to confirm someone’s credentials using ZKP technology. That results in an individual being able to prove their access, records or history without necessarily revealing their name or other identifying information to the verifier. In this model, individuals would have complete control over their own data and may grant permissions to verifiers on what can be seen and when. IDs would no longer need to be an open book for businesses and governments to use as they please. While these goals are important to retaining individual rights, they also carry with them practical use cases. Imagine someone being able to pick up a prescription without having to show the pharmacist anything and, instead, simply scanning a QR code on their phone. Their doctor had embedded the prescription requirements into their DID and it could even expire after the appropriate number of refills. Alternatively, imagine a bank customer applying for a loan without having to reveal the actual balance of their accounts. Instead, users could simply provide proof that confirms they have the predetermined minimum account value that qualifies them for the loan.
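The loan example above (proving "balance ≥ minimum" without revealing the balance) can be sketched as an issuer-attested credential. The snippet below is a deliberately simplified illustration of the interface only: it uses an HMAC as a stand-in for a real digital signature and contains no actual zero-knowledge cryptography; the DID strings, predicate format and key are all assumptions for the sketch.

```python
import hashlib
import hmac
import json

# Toy "verifiable credential" flow: an issuer privately checks the holder's
# real data, then signs only the predicate ("balance>=10000"). The verifier
# sees the predicate and the signature -- never the raw balance.
# NOTE: HMAC means issuer and verifier share a key; real systems use
# asymmetric signatures so verifiers cannot forge credentials.

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def issue_credential(holder_did: str, predicate: str) -> dict:
    """Issuer attests that the predicate holds for this DID."""
    claim = {"sub": holder_did, "predicate": predicate}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_presentation(credential: dict) -> bool:
    """Verifier checks the attestation without learning the underlying data."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue_credential("did:example:alice", "balance>=10000")
assert verify_presentation(cred)           # intact credential verifies
cred["claim"]["predicate"] = "balance>=1"  # tampering breaks the signature
assert not verify_presentation(cred)
```

A production system would replace the HMAC with a ZKP or a signature scheme supporting selective disclosure, but the division of roles (holder, issuer, verifier) is the same.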
How this opens up the DeFi future Bringing this back to DeFi, it becomes increasingly clear how DIDs can bring accountability and trust into this realm without undermining decentralization and privacy. These profiles can be utilized by customers as well as providers, creating knowable entities on decentralized platforms without actually revealing who the customer is. For example, DIDs with appropriate verifications may be required for accessing certain features or dApps, without the service itself needing to see the identity of the holder. Speaking of credentials, DeFi services could also give a form of “badge” to DID profiles to indicate accomplishments, merits or behavior in general. These could be non-transferable tokens that indicate certain metrics and stay with that ID forever, also known as “soulbound tokens.” For example, if a given user tried to perform an attack on an exchange in the past, their DID could be sent a token that indicates malicious behavior for exchanges. On the other side of things, longstanding and reliable liquidity providers could be given a similar identifier, giving those IDs a VIP status even if they join new platforms. DeFi services themselves can have their own DIDs that work in a similar way, instantly and irreversibly acting as a complete history and document of reputation. Once implemented, such a system would discourage bad behavior and result in meaningful ramifications for those who engage in it. All of this could be done without invasive surveillance or the complete knowledge of the holder. Enabling trust This approach could open the door for everyone, from individual investors to major corporations, to join the DeFi revolution. DIDs could be designed to always stay in line with legislation in a given jurisdiction, meeting the regulators halfway and preventing the regulations from being broken.
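The non-transferable "badge" mechanism described above can be sketched as a registry that permanently binds badges to an identity. This is an illustrative model only, not any real token standard; the class, DID strings and badge names are assumptions.

```python
# Minimal sketch of "soulbound" (non-transferable) reputation badges:
# once a badge is attached to a DID, it can never move to another identity.

class SoulboundRegistry:
    def __init__(self):
        self._badges = {}  # did -> list of badge records

    def mint(self, did: str, badge: str, issuer: str) -> None:
        """Permanently attach a badge to an identity."""
        self._badges.setdefault(did, []).append({"badge": badge, "issuer": issuer})

    def transfer(self, from_did: str, to_did: str, badge: str) -> None:
        """Soulbound tokens reject every transfer by design."""
        raise PermissionError("soulbound badges are non-transferable")

    def badges_of(self, did: str) -> list:
        return list(self._badges.get(did, []))

reg = SoulboundRegistry()
reg.mint("did:example:lp1", "reliable-liquidity-provider", "dex.example")
reg.mint("did:example:mallory", "flagged-malicious", "exchange.example")
assert len(reg.badges_of("did:example:lp1")) == 1
try:
    reg.transfer("did:example:mallory", "did:example:clean", "flagged-malicious")
except PermissionError:
    pass  # transfers always fail, so reputation follows the identity
```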
Customers could trust their services and vice versa, making all forms of finance and commerce function much more smoothly and with a significant reduction in fraud. Best of all, average citizens could actually have control of all of their own information, protecting them from malicious activity. What needs to be recognized is that this isn’t just a great theory; it’s already a reality. Decentralized protocols have been developed to allow for exactly these types of IDs and in some industries, they are already being used. Soon, others will start rolling out similar solutions for their customers, bringing greater security and peace of mind for everyone. This is the last puzzle piece that has been holding back mass adoption in DeFi. While it’s true that regulators’ actions will play their part in helping risk-averse investors take the plunge into this new realm, their actions alone will not be enough. That’s because accountability needs to be balanced with freedom. Decentralized identification provides what is needed today and long into the future of DeFi, wherever this exciting new industry takes us. Amit Chaudhary is head of DeFi Research at Polygon. "
2023
"Web3 and Web5: A tale of technological determination | VentureBeat"
"https://venturebeat.com/programming-development/web3-and-web5-a-tale-of-technological-determination"
"The chicken and egg question applies far beyond poultry, to include the relationship between a society and the technology it produces. Or is it vice versa? One could ask, is society a product of the very technology it comes in touch with every day? Just as wheels and steam engines changed how people move, which had a profound impact on entire economies and the global intercultural exchange, the internet and apps now determine the way we do a myriad of other things, thus also determining our culture and history. The idea in play here is aptly named “technological determinism,” and it has been very much present in some of the recent debates around Web3 and blockchain. The pendulum swings Web3, a more decentralized internet with user-owned data and digital assets, is what most in the blockchain space aspire to.
In a few ways, it is still an idea at this stage — and some think this idea has grown outdated before even coming to fruition. Enter Web5, the project of Twitter’s former CEO Jack Dorsey, an idea so forward-looking that it skips Web4 on its path. Dorsey has long despised the direction the Web3 industry is taking, once famously declaring that this new-generation network will be pretty much like Web2, but with VC funds at the rudder. While the industry indeed isn’t perfect — nothing is, let’s be honest — such a take might prompt the more spiteful to question whether Web5 would then be Web3 with the VCs overthrown by Dorsey. In essence, both of these questions also betray a certain kind of determinism. In asserting that “you won’t own Web3,” Dorsey assumes that today’s societal, political and economic relationships will determine our future technology. The same applies to a similar question about Web5, as it relies on the same intellectual foundation. Blockchain is a curious case study from the determinist perspective, especially when you look at its origins. While a lot of its underlying concepts and algorithms date back to the previous century, the work spearheading its use for trustless peer-to-peer exchanges of value — Satoshi Nakamoto’s famous Bitcoin whitepaper — came out in late 2008, at a time of upheaval in the financial world. The zeitgeist exposed the many flaws of the financial system as it was back then; it was the cause, and Bitcoin was the effect, born out of the socioeconomic state of the world. Now that the jack is out of the box, the pendulum swings back. Once a product of a society in search of a more effective financial system, cryptocurrencies are now having their own impact on society as their adoption and popularity grow.
And while this process may indeed lend itself to a determinist review, the prism of today may not be best-suited for looking into tomorrow. Blockchain is what you make of it At its core, blockchain is a permissionless, peer-to-peer technology running on community support and participation. As such, it defies pessimism through its sheer design: There are no kings in a peer-to-peer network, only early investors picking up the plentiful gains the same way they would with any other industry or individual company that shoots for the sky. Despite being more centralized now than it was at its inception, Bitcoin still has no central bank or authority. Its unintended centralization is natural to any evolving system, and its rules of the game are laid out in its protocol, enabling anyone who thinks they can do better to fork it and take their fair stab at it. There are other ways that blockchain could drive our society in terms of our larger governance and other frameworks. The blockchain space lives and breathes the community spirit, with individual builders and enthusiasts rallying around the projects and causes they like. This may point to a gradual, larger societal transition to decentralization, with local communities gaining more autonomy in the face of centralized powers. By the same account, the rise of DAOs hints at the way we could be doing business in the future. We will scrap the traditional management verticals and revenue distribution models in favor of a system where every project participant has a stake and a say in it and enjoys more individual ownership. That said, the same way automation’s impact on the job market depends on the way companies implement it, blockchain’s societal impact will depend on more than the technology itself.
As we know from TradFi, money loves company, and this law of gravity is already very much present in the crypto space, from the outsized impact that whales have on market fluctuations to the creeping centralization of Bitcoin mining. Pitfalls, present and future In some ways, with its code-is-law maxim, the blockchain space runs as codified, executable capitalism, and for all the good it can do, there are pitfalls for the community to be wary of. Take the tale of the Beanstalk hack, where a malicious actor used a flash loan to hijack the governance mechanism and scoop out the project’s treasury. Imagine a real election being hijacked this way, not even necessarily via a loan, but through the sheer financial muscle of an old-era whale; or a corporation buying out the governance in a DAO that’s tasked with keeping terrorist communications off the web to silence its rivals. Something similar may or may not be happening in the Web2 space already, and blockchain could just set up a more convenient framework for that. Through its design and its underlying values, blockchain has the capability to fundamentally transform our society for the better, giving more ownership and autonomy to everyday people at the expense of centralized powers. As we work toward that, though, whatever the number after “Web” is, it’s important to not allow the flaws of today’s financial, socioeconomic, political and other systems to move into the new one. Leonard Dorlöchter is an entrepreneur who co-founded peaq. "
2023
"Hugging Face and ServiceNow open up generative AI for coding with StarCoder | VentureBeat"
"https://venturebeat.com/ai/hugging-face-and-servicenow-open-up-generative-ai-for-coding-with-starcoder"
"The landscape for generative AI for code generation got a bit more crowded today with the launch of the new StarCoder large language model (LLM). StarCoder is part of the BigCode Project, a joint effort of ServiceNow and Hugging Face. BigCode was originally announced in September 2022 as an effort to build out an open community around code generation tools for AI. The StarCoder LLM is a 15 billion parameter model that has been trained on source code that was permissively licensed and available on GitHub. The model has been trained on more than 80 programming languages, although it has a particular strength with the popular Python programming language that is widely used for data science and machine learning (ML).
Market heating up The effort to build an open generative AI code generation tool brings new competition to OpenAI’s Codex, which powers the GitHub Copilot service, as well as efforts from other vendors including Amazon’s CodeWhisperer tool. Both the OpenAI and Amazon tools are based on proprietary code, whereas StarCoder is being made available under an Open Responsible AI License (OpenRAIL). “There are powerful code models out there, but they are all closed source, nobody knows exactly how to train them,” Leandro von Werra, ML engineer at Hugging Face and co‑lead of BigCode, told VentureBeat. Von Werra added that the idea behind BigCode and StarCoder is to build powerful code generation models in the open. While the effort is led by Hugging Face and ServiceNow, he emphasized that there is an active community of approximately 600 people contributing to the project’s success. BigCode is the spiritual successor of BigScience The BigCode effort isn’t the first time that Hugging Face has helped to build a community to open up AI development. Von Werra called BigCode the ‘spiritual successor’ of the BigScience effort, which got started in 2021. In 2022, the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) was released, providing a multi-language text generation model intended to be an open alternative to OpenAI’s GPT-3. BigCode has had a few iterative steps on the path toward the release of StarCoder. In October 2022, the project announced “The Stack,” a collection of permissively licensed code collected from GitHub as a training data set for LLM code generation. In December 2022, BigCode released its first ‘gift’ with SantaCoder, a precursor model to StarCoder trained on a smaller subset of data and limited to the Python, Java and JavaScript programming languages.
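Code models in the SantaCoder/StarCoder family are typically prompted either with plain text-to-code instructions or in "fill in the middle" (FIM) mode, where the prompt carries the code before and after a gap and the model generates the missing middle. A minimal sketch of building such a prompt; the sentinel token strings below are the ones published for this model family, but treat them as an assumption and check the model card before relying on them.

```python
# Fill-in-the-middle (FIM) prompt construction for SantaCoder/StarCoder-style
# models: the model sees <prefix> and <suffix> and completes the gap between.

FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange the code around the gap in the order the model expects."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prompt = build_fim_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return a\n",
)
assert prompt.startswith("<fim_prefix>def fibonacci")
assert prompt.endswith("<fim_middle>")
```

The resulting string would then be tokenized and passed to the model, which generates the function body that fits between the prefix and suffix.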
With StarCoder, the project is providing a fully-featured code generation tool that spans more than 80 languages. Harm de Vries, lead of the LLM lab at ServiceNow Research and co‑lead of BigCode, explained to VentureBeat that StarCoder can be used in a variety of scenarios. For example, he demonstrated how StarCoder can be used as a coding assistant, providing direction on how to modify existing code or create new code. The StarCoder LLM can run on its own as a text-to-code generation tool and it can also be integrated via a plugin to be used with popular development tools including Microsoft VS Code. Von Werra noted that StarCoder can also understand and make code changes. For example, a user can use a text prompt such as ‘I want to fix the bug in this function’ and the LLM will do just that. Why explainable AI needs an open license A critical aspect of StarCoder and the BigCode effort in general is that the technologies are all available under an open license. A key challenge for organizations deploying AI today is the need for explainable AI, where it is possible to understand how and why a model made certain choices and decisions. A related challenge is the need to ensure that AI is used responsibly and doesn’t cause harm to people via toxic content or malware. To help solve those thorny issues, BigCode is using OpenRAIL licenses and, for StarCoder in particular, the Code Open RAIL‑M license. “We know these models are very powerful and we want to make sure that they’re used for good use cases and not for use cases which will have bad implications,” said De Vries. The Code Open RAIL‑M license allows users to see the code inside the model, with restrictions intended to prevent the code from being misused — such as using it to generate ransomware or a social engineering attack. “It’s completely open like an open source license,” said De Vries.
“It just comes with the restrictions that make sure we stick to our responsible AI principles.” "
2022
"Why observability data is crucial for digital transformation | VentureBeat"
"https://venturebeat.com/2022/05/17/why-observability-data-is-crucial-for-digital-transformation"
"Presented by Era Software. In 2022, observability data volumes could increase between two and five times, according to Era Software’s 2022 State of Observability and Log Management report. That means companies could be looking at exabytes of data to manage in five years. Current tools aren’t up to the task, 79% of IT practitioners say, and costs will skyrocket in 2022 if existing tools don’t evolve. But storage isn’t the only issue; 96% say that even more critical is finding efficient ways to apply that data to solving business problems – and 100% say their organizations would benefit from innovation in observability. “It’s becoming harder and harder for engineering and technology organizations to figure out which pieces of this growing pile of log data are most important,” says Todd Persen, CEO and co-founder, Era Software.
“As those data volumes have gotten bigger than humans can even review or grasp, the tools to store that data have started to break down.” The problem with traditional monitoring As companies move into the digital age, they’re also moving from traditional two-tier application architecture to multi-tier application architecture across multiple cloud environments and managed services. The IT team doesn’t have direct control over these services and instead is dependent on what the cloud provider reports. Even though the provider is responsible for the performance of the company’s application, they might not understand or have a view of the underlying technology and how it’s performing. Without observability, the acceleration of digital transformation could be a risky journey, resulting in poorly performing services that will ultimately impact both the customer experience and the bottom line. But while observability is a straightforward goal, and many organizations realize that existing monitoring tools cannot keep up with the massive data volumes created by modern cloud environments, they’re looking for new ways to efficiently extract critical insights from observability data. “It’s not even just the number of systems — it’s that the operational modes of these systems have become so complex that even if you’re the developer who built the system, you can find yourself at an impasse,” Persen says. “How do you gain insight into what it’s actually doing as a dynamic system, and what metrics matter? What do you need to look at to tell why a system is failing?” Security also remains a considerable challenge since security organizations need to analyze massive amounts of log data to identify potential security incidents and for security audits and compliance reporting. However, many organizations are forced to limit the number of logs they ingest or store because it’s too expensive to keep them all. 
As a result of this forced picking and choosing, many security leaders say they don’t have the logs they need to troubleshoot security incidents, which negatively impacts response efforts and increases vulnerability. Why observability is essential Observability bridges the gap between legacy technology and modern approaches to data management. It’s an evolution of traditional monitoring towards understanding deep insights from analyzing high volumes of logs, metrics, and traces collected from many modern cloud environments. It ensures the delivery of reliable digital services in the face of the increasing complexity of cloud services. And it’s more and more necessary for any company that’s embarking on digital transformation. “People realize that as they’re going on this digital transformation journey, they’re adopting more tools and more products and adopting more scope and more things they need to monitor and observe,” Persen says. “Observability is an enabler because it lets people have the confidence that these new systems are doing what they want. But at the same time, it’s become table stakes for digital transformation — having a good observability story is an essential part of success.” The State of Observability and Log Management report also revealed that IT departments are erasing data to manage the cost of collecting and storing log data with more traditional tools. But ditching the data means losing critical information needed later for forensics and security analysis. “Imagine you have an attack and don’t have the data to figure out where it’s coming from – you’re exposing your organization to risk,” Persen says. 
“Not only are you exposing yourself, but if you’re not properly logging everything and potentially masking personally identifiable information, you can accidentally expose that PII.” Key observability tools While the cloud offers unprecedented efficiency, unlocks innovation, and slashes costs, it’s also made it a lot more complicated to figure out how to execute cloud digital transformation the right way. How do you build a business on top of these complex systems? “At the end of the day, most companies are not in the business of managing or dealing with infrastructure. They’re in the business of providing a core service,” Persen says. “How do they stay effective while going down this relatively new and uncharted course? It’s hard, and we see our role as trying to find a way to provide a consistent set of tools in the observability space that can fit anywhere that the customer needs to go.” Platforms like Era Software Observability Data Management , which process data between different sources and destinations at scale, plus cost-effectively store and optimize it for analysis, are the way of the future. IT and security teams should look for a platform that can gain insights from raw data, reducing TCO for existing observability and log management solutions while preserving information in low-cost object storage. This data can be used for forensics, auditing, baselining, and seasonal trends analyses. Persen also notes the importance of a platform that’s not dependent on any particular architecture but has the flexibility to function in systems from traditional on-prem to hybrid cloud to cloud. And one of the biggest benefits overall of observability data platforms like these is significant cost savings due to the efficiency the technology brings to observability workloads. When you consider the budget dedicated to log management, reducing costs means having the option to store more data and improve visibility. 
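The PII risk Persen describes can be addressed in the ingest path, by masking sensitive substrings before log lines ever reach storage. A minimal sketch of such a filter; the two patterns (email addresses and card-like digit runs) and the `[REDACTED]` marker are illustrative assumptions, not a complete PII model.

```python
import re

# Mask PII-looking substrings in a log line before it is ingested or stored.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-like runs of 13-16 digits
]

def mask_pii(line: str, marker: str = "[REDACTED]") -> str:
    """Replace anything matching a PII pattern with a fixed marker."""
    for pattern in PII_PATTERNS:
        line = pattern.sub(marker, line)
    return line

log = "2022-05-17 login ok user=jane.doe@example.com card=4111 1111 1111 1111"
masked = mask_pii(log)
assert "example.com" not in masked
assert "4111" not in masked
```

In practice this kind of filter runs inside the log pipeline, so full-fidelity logs can be retained for forensics without accidentally exposing personal data downstream.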
This leads to more reliable services, freeing up resources to invest in innovation. There’s a broader benefit too: By centralizing everything and removing artificial limits on who can access what amount of data or how much should be logged, data is democratized for the entire organization, allowing everyone to see the full view and derive insights from it. “Data democratization, providing access to the entire organization, allows everyone to get big business benefits,” Persen says. “It’s not just data made available to ITOps for troubleshooting. You see everything. You see customer interactions. You see information about application performance. You see the trends in your customers. It’s a gold mine of data for the entire organization.” Dig deeper: Read the 2022 State of Observability and Log Management report here. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,990
2,022
"How Web3 and Cloud3 will power collaborative problem-solving and a stronger workforce | VentureBeat"
"https://venturebeat.com/cloud/how-web3-and-cloud3-will-power-collaborative-problem-solving-and-a-stronger-workforce"
"How Web3 and Cloud3 will power collaborative problem-solving and a stronger workforce The onset of the COVID-19 pandemic propelled cloud adoption at an unprecedented rate. The benefits of cloud computing combined with the promises Web3 holds for such things as blockchain-backed decentralization, scalability and increased ownership for everyday users became clearer when the world shut down in March 2020. Now, Web3’s lesser-known but important counterpart, Cloud3, is also beginning to gain traction. Executives like Salesforce CEO Marc Benioff are already mapping how their companies will adopt the new iteration of cloud computing — the core of which is built around working from anywhere — further supporting the workforce shift. “We’re in a new world. This is a huge opportunity to create and extend and complement our platform. 
We realized for each and every one of our clouds, it was time to transform to become a work-from-anywhere environment. We ultimately are focused on delivering the operating system for Cloud3,” Benioff said in a company press release earlier this year. Cloud3 and Web3 may sound like the latest tech buzzwords, but according to industry experts, the two are on the rise, and enterprise executives and community leaders need to pay attention or risk getting left behind. The promises of Web3 Before there was a third iteration of anything, there had to, of course, be a first and second to lay the foundation. The original iteration of the World Wide Web was created by Tim Berners-Lee in 1990. It focused on HTML, specification of URLs and hypertext transfer protocol (HTTP) commands. While Web 2.0 is complex, it can be simplistically defined as what we know the internet to be today, including access to the web via Wi-Fi, smartphones and the rise of social media usage. Web3’s features ensure more democratization over the web. With blockchain-backed decentralization and scalability, there will be less oversight, which may, of course, lead to bad actors, but could also pave the way for underrepresented people, communities and companies to gain more control. “It always seems that POC [people of color] create the culture of communities or companies, but end up benefiting last from it. And with this kind of new paradigm of power Web3 can provide, we’re able to finally take ownership of our communities,” said Cheryl Campos, head of venture growth and partnerships at Republic. “We can use Web3 to more easily and equally share the wealth with others and make sure we are sharing the profits with others. 
What is so exciting is that Web3 allows for that through non-fungible tokens (NFTs) and decentralized autonomous organizations (DAOs), and even with new DeFi (decentralized finance) products coming out that focus on supporting communities or loaning to others. That is not just the wealth gap, but also the ownership gap, that Web3 helps bring back to the hands of communities and the people within them in a meaningful way,” Campos added. Founder and partner of the Open Web Collective, Mildred “Mimi” Idada, agrees: “The Web3 ethos can bring in more diversity, not just in terms of race, nationality or gender, but also diversity in backgrounds, skills and perspectives.” “Diverse skill sets and perspectives are also necessary for innovation in the Web3 space. We need not only technical talent, such as developers, but also creatives, lawyers, bankers and community builders,” Idada said. That said, the innovation and benefits Web3 can provide to communities, businesses and investors alike won’t happen overnight. According to Greg Isenberg, cofounder and CEO of Late Checkout, a company that designs, creates and acquires Web3 and community-based technology businesses, Web3 still has a ways to go until the full breadth of its benefits is visible, but it’s important for executives and community leaders to pay attention now. “Web3 doesn’t, and can’t, work unless the UX [user experience] is very simple — so much so that your grandmother could buy a digital asset like an NFT to have ownership in Web3. But to do that, we need a lot of infrastructure in place,” Isenberg said. Isenberg said he has seen several companies make great strides in UX with a proactive eye toward the rise of Web3, like Rainbow, the Ethereum wallet that allows you to manage many digital assets in one place. Isenberg said he expects other companies across industries to soon follow suit. 
He also echoes Campos’ and Idada’s excitement and predictions regarding Web3, citing the impressive outpouring of cryptocurrency donations made to Ukraine totaling around $55 million in just days. It’s the Web3 infrastructure that these platforms and currencies like crypto are beginning to build that creates the scalability of donations like this. “What gets me excited about Web3 in general is the coordination it brings to capital to address important things. That [was possible] because of the web infrastructure that was built on top of it,” Isenberg said. “I expect social causes to be a huge part of popular Web3 projects going forward. Now I’m thinking, ‘What else can this help change?’ It’s interesting because there’s a perception that Web3 is bad for the environment, for example, but I actually think that a large part of solving the world’s problems will stem from coordinating people and capital, and Web3 has already proven to be really good at that.” Standing on Cloud3 Supporting Web3’s promise to coordinate and help solve community and world problems efficiently and at scale will require Cloud3’s advanced capabilities — which assure secure access to collaborative tools from anywhere. The evolution of cloud technology began with large IT operations that were disrupted by the software-as-a-service boom. Next came infrastructure-as-a-service and platform-as-a-service technologies, further relieving pressures placed on IT teams and developers alike. Now, the demand for everything the prior cloud iterations provide is just as fierce as the demand from companies and the public alike to access these tools from wherever, whenever — while simultaneously having strong IT security as a backbone. 
“Cloud3 will empower businesses to leverage cloud-based experience platforms as a toolkit to seamlessly compose personalized communication experiences,” said Steve Forcum, Cloud3 expert and director and chief evangelist for marketing at Avaya. A report from the health information technology and clinical research company, Iqvia, underscores that “emerging Cloud3 technologies will disrupt application development in organizations across all industries. Companies in the life sciences and financial industries, in particular, are well-positioned to leverage Cloud3 to differentiate themselves by applying artificial intelligence to big data.” Cloud3’s emergence will also transform how businesses are run and how tools and information are supported and accessed to match the pace and style of life that the world has shifted to post-pandemic. “Rather than businesses focusing on moving to the cloud, [with Cloud3] they’ll be forced to think of ways to transform within the cloud. With this comes innovation and new, cloud-based technologies. Disruptive technology should not require disruption to your business,” Forcum said. “A converged platform approach with composability at its core is malleable in nature, adjusting to the organization’s business processes, versus forcing processes to compromise around the limitations of a cloud platform or app.” Though intriguing promises and benefits stem from both the emergence of Web3 and Cloud3, there are concerns where they overlap. “A drawback we do see with [the overlap of the] decentralized web [Web3] and Cloud3 is more the industry recognizing that while there are similarities, these are also two very different spaces with very different mechanisms and tools to achieve their goals,” said Idada. “Nonetheless, hardware, computation power and cloud computing will be key pieces to the next phase of the web. 
Improved and enhanced capabilities will change how everyday apps operate and what is possible to meet our faster-paced, on-the-go lifestyles.” What’s next for Web3 and Cloud3? As for what the future holds as innovation increases and cloud adoption accelerates, pay attention or risk getting left behind is the consensus from experts. Isenberg predicts that as we move closer to the fully fleshed-out iterations of both Web3 and Cloud3, we may see more legacy companies begin to adopt them and make moves in the space, but that along with it, particularly for Web3, we may also see many of those companies fail. “We’ll likely see legacy companies embrace Web3 and it’s probably not going to go very well for many of them,” he said. “I think you’re going to see a small percentage, maybe 1% to 5%, embrace it really, really well and become category leaders among crypto data brands while others struggle to find their place.” “The future of work is remote. So, you have to make sure that there is infrastructure that will allow for this, or otherwise, you will not retain or get the best talent right for your operations. And more than ever, it has been clear that companies that embrace this Web3 space are more likely to attract younger talent and folks that are bullish on the space,” Campos added. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,991
2,017
"The big opportunities in serverless computing | VentureBeat"
"https://venturebeat.com/cloud/the-big-opportunities-in-serverless-computing"
"Guest The big opportunities in serverless computing Serverless computing is a type of cloud service where the hosting provider allocates adequate resources for you on the fly rather than making you pay for dedicated servers or capacity in advance. It’s a major technological breakthrough, and we expect to see a significant inflection point soon in this nascent market. Serverless computing is the next phase in the evolution of IaaS (Infrastructure-as-a-Service). It completely abstracts the underlying infrastructure from developers and essentially virtualizes runtime and operational management. Oftentimes called FaaS (Function-as-a-Service), serverless architecture lets you execute a given task without worrying about servers, virtual machines, or the underlying compute resources. 
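The FaaS model described above boils down to writing a single function that the platform invokes once per event. A minimal sketch in Python, modeled loosely on an AWS-Lambda-style signature (the event shape here is a made-up example, not any provider’s actual schema):

```python
import json

def handler(event, context=None):
    """Entry point the FaaS platform calls once per event.

    No server setup: the provider provisions compute, runs this
    function, bills for the execution, and scales copies as needed.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can invoke it directly, the way the platform would:
print(handler({"name": "serverless"}))
```

Everything operational — provisioning, scaling to zero, and per-invocation billing — happens outside this code, which is the appeal the article goes on to describe.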
There are a few clear advantages in the adoption of serverless technology: Agility – Since developers are not deploying, managing, or scaling servers when using serverless, organizations are able to abandon infrastructure administration. This dramatically decreases operational overhead. Serverless is highly compatible with a microservices architecture, which entails significant agility benefits as well. Scalability – A big advantage of serverless is the scalability it enables, as upgrading and adding compute resources is no longer reliant on the DevOps team. Serverless applications can quickly, seamlessly, and automatically scale up to accommodate spikes in traffic; conversely, these applications also automatically scale down when there are fewer concurrent users. Billing model – When using serverless platforms, you pay only for the compute that you need. Serverless architecture introduces a true pay-per-usage model, where customers pay only when a function is executed. Serverless’ billing model makes it ideal for microservices that have small load requirements and for applications with a “spiky” traffic profile. Unlike in traditional environments, there is no need to pay for VMs or containers that often sit idle. Security – Serverless architecture provides security benefits. Because the organization is no longer managing servers, DDoS attacks are considerably less threatening, and the automatic scaling capabilities of the serverless functions help mitigate risk from this type of attack. Serverless also keeps attackers from targeting OS vulnerabilities and installing malicious software on the company’s servers. Why serverless is the next big thing Serverless computing isn’t just a niche solution for cutting-edge tech organizations. 
It is transforming the way developers deploy and manage complex software, and it has vast implications for how organizations deliver their applications. One interesting area of relevance is IoT applications, which involve billions of end-devices simultaneously using compute resources. With its cost savings and scaling efficiencies, serverless will be key to the mass adoption of such technologies. Amazon, Google, Microsoft, and IBM already offer serverless platforms. As with many other cloud-related capabilities, Amazon was the pioneer, introducing AWS Lambda back in 2014, and it appears the company is bullish on the space. In April, at ServerlessConf in Austin, Tim Wagner, GM of AWS Lambda services, shared that AWS is seeing an increasing trend of enterprises adopting AWS Lambda services. Other cloud vendors are seeing the future through the same lens. For example, Jason McGee, VP & CTO of IBM Cloud, has said IBM analysts predict the FaaS market will grow 7-10x by 2021. This statement is supported by a recent Markets and Markets report, which predicts the serverless market will grow from a $1.88 billion market in 2016 to $7.72 billion by 2021. The public declarations we’ve heard from these vendors imply they are heavily invested in serverless, but this is not just an area of interest for prominent cloud vendors; we are already witnessing an entire ecosystem of startups emerging. The serverless ecosystem, shown above, is growing in two areas: Platforms – Alongside the big cloud vendors, an abundance of platforms and open source frameworks are emerging to give developers the ability to host, deploy, and run their serverless applications. One example is Iron.io, which to date has raised $17 million, offering a serverless app platform where businesses can run applications on a public cloud, a private cloud, and even on premise. Another interesting player is Auth0’s Webtask. 
The Identity-as-a-Service company offers a platform that supports a variety of integrations and allows developers to build applications without thinking about infrastructure. Technology enablers – These solutions are enabling the adoption of serverless platforms and frameworks by providing easier usage and integration with serverless environments. Enablers include development and monitoring tools, as well as dedicated cybersecurity solutions. An example of an interesting development tool is the open source solution Serverless, a provider-agnostic framework that allows developers to build, deploy, and operate serverless architectures on top of all leading cloud providers. Stackery, an operation management platform, is another serverless technology adoption enabler. It offers infrastructure provisioning to customers developing serverless applications and enables visibility and control throughout the serverless application management lifecycle. Another great mention is IOpipe, which provides tools for monitoring and debugging the performance of serverless applications. Though we do, indeed, see security benefits in adopting serverless architecture, as happens with all emerging technologies, new security vulnerabilities will arise and need to be addressed. Twistlock is a growing company in this space. The cloud native security company, which has raised $30 million, offers security solutions for serverless applications using machine learning and advanced threat intelligence techniques. [Disclosure: Our firm is a seed investor in Twistlock.] Early adopters of serverless With a serverless approach, a company’s developers can focus more on writing code than on managing the operational tasks of an application. Netflix is a great example. Just imagine the infrastructure required to serve over 100 million subscribers around the globe, the cost of storage that goes with that, and the management of that scale of compute resources. 
Netflix is a known cloud user, and in 2016, the company announced it had completed its migration; the company is now 100 percent cloud based. Netflix is an outspoken advocate for AWS Lambda and is leveraging serverless technology for delivering media files, backups, instance deployments, and monitoring solutions. Other prominent organizations such as Expedia, Coca-Cola, and Adobe have also joined the serverless wave. The big opportunities Serverless is not operations-less. Operations is not just managing and scaling servers; it is monitoring, packaging, securing, deploying, and much more. While the context above outlines a very exciting space with many advantages, serverless is still in its infancy, and it presents some inherent challenges. These challenges present great opportunities for startups to build new and exciting solutions — for example, to innovate and address the following issues: Lack of tooling – Monitoring, logging, developing, and debugging tools are still non-existent or immature. Vendor lock-in – Serverless features differ among cloud vendors. In addition, each vendor has its own flavor of integration points, configuration, etc. In order to switch vendors, customers will probably need to change their code, their operational tools, and maybe even their software architecture. Performance – Service level agreements do not guarantee performance, and functions may take a long time to respond, especially in cases where it has been a while since their last invocation. This can be a deal breaker for many applications. Serverless is already being adopted by established corporations around the globe, and it is a space that will provide many interesting investment opportunities. Expect the “State of Serverless” map above to expand significantly in the years to come, pushing a new wave of innovation. Yoav Leitersdorf and Ofer Schreiber are Partners at YL Ventures. 
Iren Reznikov is an Associate with the firm, and Idan Ninyo is the firm’s Engineer in Residence. "
13,992
2,022
"New Kubernetes 1.26 release boosts security, storage, teases dynamic resource allocation | VentureBeat"
"https://venturebeat.com/data-infrastructure/new-kubernetes-1-26-release-boosts-security-storage-teases-dynamic-resource-allocation"
"New Kubernetes 1.26 release boosts security, storage, teases dynamic resource allocation In the cloud-native space, where applications are purpose built and delivered to run in the cloud, one technology in particular rises above all others — Kubernetes. Kubernetes is an open-source container orchestration system, originally developed by Google in 2014. Since 2015, Kubernetes has been developed under the governance of the Cloud Native Computing Foundation (CNCF), which is part of the Linux Foundation and benefits from the support of thousands of developers and hundreds of supporting organizations. In 2022, all the major public cloud providers offer managed Kubernetes services, including Microsoft’s Azure Kubernetes Service (AKS), the Google Kubernetes Engine (GKE) service and the Amazon Elastic Kubernetes Service (EKS). 
Kubernetes also benefits from the support of numerous vendor distributions, including Red Hat’s OpenShift, Canonical Kubernetes and the SUSE Rancher Kubernetes Engine (RKE). Sitting upstream from all the cloud and software vendors’ efforts is the open-source project that is being updated today to version 1.26. The new Kubernetes 1.26 release integrates new security, storage, container registry and performance capabilities. A total of 6,877 individuals representing 976 different companies contributed to the release. One of the biggest improvements in the 1.26 release isn’t to be found in any one piece of code, but rather in how the project is managed. All new features and updates are developed with an approach known as Kubernetes Enhancement Proposals (KEPs). Prior to the 1.26 release, all the proposed enhancements for a given release were tracked in a simple spreadsheet. With the new release, there is a new project enhancement dashboard for tracking features. “Previously we had a spreadsheet for tracking, which was terrible, it had a lot of custom optimizations to it and it was broken most of the time,” Leonard Pahlke, Kubernetes 1.26 release lead, told VentureBeat. “With the new system it’s way better.” Security takes center stage in Kubernetes 1.26 One of the big areas of improvement for release 1.26 is in security. Version 1.26 advances the digital signing of code with KEP-3031, which outlines how the security capability should be implemented. Digital signing helps to improve the authenticity of code as well as helping to provide a chain of trust, which is critical for the enablement of secured Software Bill of Materials (SBOMs). SBOMs have become an increasingly important aspect of the software supply chain for both open-source and proprietary software. 
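Signed artifacts and SBOMs both rest on being able to verify what was downloaded. The project’s actual tooling is cosign-based, but the underlying integrity check can be illustrated with a plain checksum comparison in Python (the file name and digest in the commented call are placeholder values, not real release data):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a release artifact in chunks so large tarballs need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    """Compare against the digest published alongside the release."""
    return sha256_of(path) == expected_digest

# Placeholder usage -- a real check would read the published checksums file:
# verify("kubernetes-client-linux-amd64.tar.gz", "ab12...")
```

A checksum proves integrity only; the cosign signatures described next add provenance, attesting to who produced the artifact.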
The Kubernetes project uses open-source cosign technology, which is part of the open-source sigstore initiative backed by technology vendor Chainguard. “We are moving the Kubernetes Enhancement Proposal (KEP) [3031] to beta, further symbolizing that all the work we have been planning to sign with sigstore is now complete,” Adolfo García Veytia, technical lead, Kubernetes SIG release, and software engineer at Chainguard, told VentureBeat. “Completing this KEP means that all software artifacts we build will now be signed, not just the container images. And I cannot underscore the significance of this milestone and the security benefits it will bring for developers using Kubernetes.” The other noteworthy security enhancement that lands in version 1.26 is support for Windows privileged containers with KEP-1981, which has been in progress for nearly two years. Kubernetes supports both Linux and Microsoft Windows, though there isn’t complete feature parity across the two operating systems. A privileged container has greater access to devices on a Kubernetes host than a default container does. Previously, Kubernetes supported only Linux privileged containers. Dynamic resource allocation is coming One of the newest pieces of the version 1.26 update is an alpha feature tracked in KEP-3063 for dynamic resource allocation. While Kubernetes first became popular as a way to run workloads in the public cloud, in recent years it has also been deployed on-premises as well as in edge computing environments, which is where dynamic resource allocation will be a big boost. “Dynamic resource allocation basically adds a new interface with a new API, where you can more easily connect GPUs and other resources,” Pahlke said. “This enables new features for edge computing.” With the release of version 1.26, the focus now turns to the next update. There are typically three Kubernetes releases in each year; the next major update is expected to be at the end of April 2023. 
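The dynamic resource allocation feature Pahlke describes is driven by new API objects that workloads claim rather than fixed device-plugin slots. A hedged sketch of what a claim might look like under the 1.26 alpha API (the names below are illustrative, and field shapes may change while the feature remains alpha):

```yaml
# Alpha API -- shape may change in later releases.
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: gpu-claim                 # illustrative name
spec:
  resourceClassName: example-gpu  # a ResourceClass published by a resource driver (hypothetical)
```

A pod would then reference the claim in its spec, and the matching resource driver, rather than the scheduler’s static device accounting, decides how the GPU or other device is allocated.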
"
13,993
2,023
"Red Hat gives an ARM up to OpenShift Kubernetes operations | VentureBeat"
"https://venturebeat.com/data-infrastructure/red-hat-gives-an-arm-up-to-openshift-kubernetes-operations"
"Red Hat gives an ARM up to OpenShift Kubernetes operations Red Hat is perhaps best known as a Linux operating system vendor, but it is the company’s OpenShift platform that represents its fastest growing segment. Today, Red Hat announced the general availability of OpenShift 4.12, bringing a series of new capabilities to the company’s hybrid cloud application delivery platform. OpenShift is based on the open source Kubernetes container orchestration system, originally developed by Google, that has been run as the flagship project of the Linux Foundation’s Cloud Native Computing Foundation (CNCF) since 2015. OpenShift runs across multiple public cloud providers and is also able to run on-premises in private cloud deployments. 
OpenShift is widely used to run any type of workload and in recent years has found increasing traction with artificial intelligence and machine learning use cases. With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week. IBM reveals OpenShift’s value The financial importance of OpenShift to Red Hat and its parent company IBM has also been revealed, with IBM reporting in its earnings that OpenShift is a $1 billion business. “Open-source solutions solve major business problems every day, and OpenShift is just another example of how Red Hat brings business and open source together for the benefit of all involved,” Mike Barrett, VP of product management at Red Hat, told VentureBeat. “We’re very proud of what we have accomplished thus far, but we’re not resting at $1B.” OpenShift 4.12 giving security a new profile Red Hat OpenShift is based on the open-source Kubernetes project, but it also extends what is available with its own set of open-source features. One of the core areas where Red Hat has invested effort in recent years is with a concept known as a Kubernetes Operator. With an Operator, there is a manifest file that defines how a particular set of services should operate within a Kubernetes cluster. Operators are useful both for initial setup as well as for ongoing operations. Among the new features in OpenShift 4.12 are a pair of Operators designed to help improve security and compliance. 
Barrett explained that the new Red Hat OpenShift Security Profiles Operator (SPO) provides a way to define secure computing (seccomp) profiles and Security-Enhanced Linux (SELinux) profiles as custom resources, synchronizing profiles to every node in a given Kubernetes namespace. With Kubernetes, a namespace provides a way to identify different resources running in a cluster. Both seccomp and SELinux provide a set of controls for how system and application processes can (or cannot) be executed given certain constraints. The SPO can work together with other security controls that are native to Kubernetes, including the Open Policy Agent (OPA) Gatekeeper open-source project, which is led by startup Styra. Barrett explained that OPA Gatekeeper is what is known as a Kubernetes admission controller plugin. It enables customers to define admission policies using the OPA policy language, called Rego. Barrett noted that OPA Gatekeeper can be used to determine whether a new resource is required to have a seccomp profile to be admitted, but it cannot help with defining custom seccomp or SELinux profiles, which is where the SPO now fits in. Red Hat is also updating its Compliance Operator in the OpenShift 4.12 update. The Compliance Operator has been designed to help ensure that a given deployment meets an organization’s regulatory compliance requirements. Red Hat has long focused on supporting compliance efforts with its platform, introducing the open-source OpenSCAP scanner back in 2015 for its enterprise Linux platforms. OpenSCAP uses the Security Content Automation Protocol (SCAP) supported by the U.S. National Institute of Standards and Technology (NIST). With the OpenShift 4.12 update, the Compliance Operator supports a longer list of compliance profiles for government and industry-related regulations. “Red Hat tests and updates the profiles available for the Compliance Operator with every release,” Barrett said. 
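The admission decision Barrett describes, rejecting a workload unless it declares a seccomp profile, can be sketched in plain Python. This is only an illustration of the kind of rule a Gatekeeper policy enforces; the simplified pod-spec shape and function name are assumptions, not Red Hat's or Styra's actual code:

```python
# Illustrative admission-style check: reject pod specs that do not
# declare a seccomp profile, mirroring the kind of rule OPA Gatekeeper
# can enforce and that SPO-managed profiles can satisfy. The simplified
# pod-spec shape is an assumption for illustration only.

def admit_pod(pod_spec: dict) -> tuple[bool, str]:
    '''Return (admitted, reason) for a simplified pod spec.'''
    sec_ctx = pod_spec.get('securityContext', {})
    profile = sec_ctx.get('seccompProfile', {}).get('type')
    if profile in ('RuntimeDefault', 'Localhost'):
        return True, 'seccomp profile present'
    return False, 'denied: pod must declare a seccomp profile'

# A pod with a profile is admitted; one without is rejected.
ok, _ = admit_pod({'securityContext': {'seccompProfile': {'type': 'RuntimeDefault'}}})
denied, reason = admit_pod({'securityContext': {}})
```

In a real cluster, the same decision is expressed as a Rego policy evaluated by the Gatekeeper admission webhook rather than application code.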
OpenShift gets an ‘ARM’ up OpenShift, like many applications developed in the last several decades, was originally built just for the x86 architecture that runs on CPUs from Intel and AMD. That situation is changing, as OpenShift gains more support for running on ARM processors with the OpenShift 4.12 update. Barrett noted that Red Hat OpenShift announced support for the AWS Graviton ARM architecture in 2022. He added that OpenShift 4.12 expands that offering to Microsoft Azure ARM instances. “We find customers with a significant core consumption rate for a singular computational deliverable are gravitating toward ARM first,” Barrett said. Overall, Red Hat is looking to expand the footprint of where its technologies are able to run, which now also includes new cloud providers. On Jan. 31, Red Hat announced that for the first time, Red Hat Enterprise Linux (RHEL) would be available as a supported platform on Oracle Cloud Infrastructure (OCI). While RHEL is now coming to OCI, OpenShift isn’t — at least not yet. “Right now, it’s just RHEL available on OCI,” Mike Evans, vice president of technical business development at Red Hat, told VentureBeat. “We’re evaluating what other Red Hat technologies, including OpenShift, may come to Oracle Cloud Infrastructure, but this will ultimately be driven by what our joint customers want.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,994
2,023
"ARMO shows how ChatGPT can help protect Kubernetes  | VentureBeat"
"https://venturebeat.com/security/armo-shows-how-chatgpt-can-help-protect-kubernetes"
"ARMO shows how ChatGPT can help protect Kubernetes The impact of ChatGPT and generative AI on the security landscape is difficult to gauge. While threat actors can use these AI-driven solutions to generate phishing emails and malicious code, the use cases for security teams are still emerging. But a new ARMO integration suggests that ChatGPT can help protect Kubernetes. Today, ARMO, an open-source security provider and creator of the Kubernetes security tool Kubescape, announced the release of a new ChatGPT integration within the ARMO platform. The new integration enables security teams to build custom controls with ARMO based on Open Policy Agent (OPA), which can be run to ensure Kubernetes clusters and CI/CD pipelines are secure and correctly configured. 
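The workflow behind such an integration is essentially prompt construction: a user's natural-language check is wrapped into an instruction asking the model for a Rego control plus a description and remediation. A hypothetical sketch of that wrapping step (this mirrors the workflow described in the article, not ARMO's actual integration code; the function and wording are assumptions):

```python
# Hypothetical sketch of wrapping a natural-language security check into
# a prompt that asks an LLM for a Rego control, a description, and a
# suggested remediation. Not ARMO's actual code; names and wording are
# assumptions for illustration.

def build_control_prompt(request: str) -> str:
    return (
        'You are a Kubernetes policy assistant.\n'
        'Write an OPA control in the Rego language that checks the '
        'following, and include a one-line description and a suggested '
        'remediation.\n'
        f'Check: {request}\n'
    )

prompt = build_control_prompt('containers must not run as root')
```

The prompt would then be sent to a chat-completion API, and the returned Rego would be validated before being saved as a custom control.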
More broadly, the integration highlights that ChatGPT has the potential to be a force multiplier for security teams, which can use it to deploy security controls across the cloud within containerized environments. Protecting the cloud: A use case for ChatGPT and generative AI The release comes as the defensive use cases of ChatGPT and generative AI continue to develop, and just a month after Orca Security released an integration to process security alerts and generate actionable remediation steps to help analysts identify and respond to threats faster within cloud environments. ARMO’s new integration demonstrates that ChatGPT can also be applied to secure Kubernetes deployments. In this particular use case, security teams can generate code and controls in the uncommonly used Rego language by entering queries in natural language. “ARMO has integrated ChatGPT to help users create their own custom controls without the need to know how to use OPA and Rego,” said Ben Hirschberg, CTO and cofounder of ARMO. “All they need to do is write what they want to check in natural language, and ARMO with ChatGPT will generate the exact control written in Rego with the description and suggested remediation.” This means that security teams can spend less time learning a new coding language and more time securing their cloud environments against cybercriminals. While this is just one use case for ChatGPT to secure Kubernetes, Hirschberg notes that there are many other ways the tool could be used, from writing YAML files to automating the deployment and security of new clusters. Other security tools for Kubernetes For ARMO, the integration with ChatGPT provides a valuable opportunity to differentiate itself from other providers in the market. 
One of ARMO’s main competitors is Aqua Trivy, which can scan containerized environments for vulnerabilities while offering automated compliance monitoring and runtime protection for Kubernetes workloads. Aqua Security is currently valued at $1 billion. Another competitor is Checkov, a command-line tool designed to run infrastructure-as-code scans on Kubernetes, Terraform, CloudFormation, Helm and ARM Templates. Palo Alto Networks acquired Checkov’s parent company Bridgecrew for an undisclosed amount in March 2021. Through the use of generative AI and ChatGPT, ARMO hopes to differentiate itself from other providers by augmenting the coding knowledge of users so they can more confidently implement Kubernetes security controls. "
13,995
2,022
"Intel Geti and OpenVINO efforts advance AI and computer vision | VentureBeat"
"https://venturebeat.com/ai/intel-geti-and-openvino-efforts-advance-ai-and-computer-vision"
"Intel Geti and OpenVINO efforts advance AI and computer vision Computer vision is among the most widely deployed use cases for artificial intelligence (AI) today, enabling AI systems to rapidly identify objects and people. The global market for computer vision hardware and software services is forecast to reach $41 billion by 2030, according to Allied Market Research, and it’s a market that is attracting no shortage of vendor interest. At the Intel Innovation 2022 event today, the chipmaker revealed details about its push into computer vision with its Intel Geti platform and OpenVINO toolkit software for AI deep learning and inference. 
“Computer vision models utilize artificial intelligence to predict and extract valuable information from images and videos,” Adam Burns, VP and director of AI developer tools in the network and edge group (NEX) at Intel, said during an Intel Innovation 2022 press briefing. Burns said the information that computer vision models can detect ranges from identifying a defect during a manufacturing process to determining how many people are in line at a restaurant. He added that computer vision is used to drive enterprise automation, productivity and innovation across many verticals, and that demand is increasing rapidly. Sonoma Creek now rebranded as Intel Geti Intel has been developing its own computer vision platform under the codename Sonoma Creek. That effort is now coming to fruition under the rebranded name Intel Geti. Intel’s goal with Geti is to help accelerate adoption of computer vision using Intel hardware and software. Geti provides a user interface that enables users to load and annotate data, and train and retrain models. “Intel Geti is a computer vision AI platform that allows anyone in the enterprise the ability to rapidly develop AI models that improve business innovation and digital transformation,” Burns said. “We understand the value of AI and computer vision in the enterprise and we also understand the development barriers to adoption.” Burns emphasized that Intel’s motivation with Geti was to make computer vision more approachable, enabling those who may not have an extensive AI or machine learning background to quickly and easily build high-quality models. Early users of Geti include healthcare While Intel is only publicly announcing Geti today, it already has more than 30 partners active in the technology’s early access program. 
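Before a computer vision model like those Geti produces can extract information from an image, the pixel data is typically normalized and reshaped into the layout the model expects. A minimal standard-library sketch of that generic preprocessing step (the 0-255 to 0.0-1.0 normalization and channels-first layout are common conventions, not specific to Geti or OpenVINO):

```python
# Generic CV preprocessing sketch: convert an HxWxC image of 0-255 ints
# (nested lists) into the channels-first, 0.0-1.0 float layout many
# vision models expect. Conventions are common practice, not tied to
# any Intel product.

def to_chw_float(image):
    '''image: H x W x C nested lists of 0-255 ints -> C x H x W floats.'''
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [
        [[image[y][x][ch] / 255.0 for x in range(w)] for y in range(h)]
        for ch in range(c)
    ]

# A 1x2 RGB image: two pixels, three channels each.
tensor = to_chw_float([[[0, 128, 255], [255, 0, 0]]])
```

In practice this step is handled by the deployment runtime or a library such as OpenCV, but the layout conversion is the same idea.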
One such early-access user is the Royal Brompton Hospital in the U.K., where clinicians are using Geti to help with their research of rare respiratory conditions. Burns said that, without any AI expertise, the team at Royal Brompton is able to train AI models to analyze research data just as they would train a human member of the research team. Trained AI models help accelerate the processing of images. “This solution can help to greatly improve early diagnosis and treatment options for patients with severe respiratory conditions like cystic fibrosis,” Burns said. Another early use case, also out of the U.K., is with visual analytics vendor Sensing Feeling, which is building a solution with Intel Geti that uses edge-based analytics to improve construction worker safety. “Through the computer vision model created by Intel Geti, their solution can sense when heavy equipment or machinery comes within unsafe proximity to either other equipment or personnel,” Burns said. Crack open the Vino for Intel Geti Geti isn’t Intel’s first initiative aimed at helping build computer vision models. Back in 2018, Intel announced its OpenVINO toolkit, designed to help build computer vision models for the edge. Burns said that OpenVINO and Geti are complementary technologies that serve different AI modeling needs. “Enterprise users can upload images and rapidly build computer vision models with Intel Geti and then deploy those models using OpenVINO at scale running on Intel hardware,” Burns said. “Intel Geti can output an optimized and ready-to-deploy OpenVINO model with a push of a button, saving additional optimization steps.” At Intel Innovation 2022, the company also announced the release of OpenVINO 2022.2, which adds support for the Intel GPU Flex Series data center processors that were launched at the end of August. 
The updated OpenVINO release also adds a new automated optimization feature that will discover all the compute devices, including GPUs, that are available in a system. “With OpenVINO and now Intel Geti, we’ve continued to try and make AI attainable for decision-makers and for developers within the enterprise,” Burns said. “Together, these two products enable the rapid development and deployment of computer vision models.” "
13,996
2,022
"Intel plunges into consumer and datacenter GPUs amid market uncertainty | VentureBeat"
"https://venturebeat.com/data-infrastructure/intel-plunges-into-consumer-and-datacenter-gpus-amid-market-uncertainty"
"Intel plunges into consumer and datacenter GPUs amid market uncertainty Jeff McVeigh is VP and GM, Super Compute Group at Intel. Cloud gaming did not have its greatest week, as Google decided to shut down its cloud gaming service Stadia by January. Even as it did so, the company said its service had proven itself over the past few years and that it would continue providing the technology. On top of that, rivals such as Nvidia’s GeForce Now, Microsoft’s Xbox Cloud Gaming and Amazon’s Luna are carrying on with their own services. That’s encouraging to Jeff McVeigh, vice president and general manager of Intel’s Super Compute Group. The company has launched its new data center graphics processing units (GPUs), dubbed the Flex Series. I spoke to McVeigh about this at the Intel Innovation event this week. 
He sees new markets, like providing cloud gaming to hotel rooms so gamers can play while they’re traveling. Intel is rolling out GPUs for the datacenter as well as its $329 Intel Arc A770 for mid-range gaming computers. Its timing isn’t the greatest, as there is a glut of GPUs in the market now thanks to a sudden crash in Chinese market orders, a crypto crash and changes to mining, and a general global economic slowdown. Still, McVeigh is optimistic about the future and Intel’s long-term plans to battle Nvidia and Advanced Micro Devices in both data center and consumer graphics markets. McVeigh’s role is to focus on the high-performance computing and data center markets, where the success of applications such as cloud gaming will determine Intel’s own path forward in graphics. Here’s an edited transcript of our interview. VentureBeat: And you’ve got new chips coming out. Jeff McVeigh: Chips coming out! Different products. This is our Flex 140 and 170. Focused around flexible use cases, which is why we came up with the name. It allows us to build our value add around media processing, cloud gaming, AI [artificial intelligence] inference. We’ll enable new use cases in the future. We’ve gone from PowerPoint to reality. VentureBeat: Where are you finding the opportunity in the market? What’s the opening? McVeigh: We initially started with media delivery and cloud gaming. We’re seeing some good traction for both of those. We’re not ready to reveal some of the customers, but some are in active deployment, and others are deep in evaluations on those use cases. This is where we’re rolling out our AI enablement solution. We think that’s the next area you’ll see a lot of pull from. But we’re seeing good traction on these first use cases. 
VentureBeat: Where do you come in on price and performance compared to AMD and Nvidia? McVeigh: Quite well. In terms of density, as an example, for media transfer performance, 1080p, we’re talking about 36 streams versus Nvidia can do seven on an A10. Now, something like an L40, they’re going to have more streams. But you’re looking at 300 watts versus 75 watts. From a TCO standpoint, it’s not only the cost of the capital, but also your operational cost. Power consumption is quite good there. We feel we have a strong offering around media. Our gaming stack, we continue to optimize. I’d say we’re higher density, but maybe not as far ahead. On the AI side, we have more software optimization to do, but I feel like we’re going to be very competitive versus an A10. We don’t know what H10 and others will look like because those aren’t in the market yet, but that’s where we stand right now. Right now, we have two versions. One for Android gaming, where we see a lot of adoption in China, for example, where that use case is very popular. Our Windows cloud gaming stack is in the beta stage right now. We have more optimization to do, game compatibility, support for more legacy games, things we need to optimize for DX11 and so forth. That’s where we aren’t quite so far along versus the Android stack. VentureBeat: It feels like cloud gaming has an interesting opportunity. The Samsung gaming hub TVs–in a time when there was a shortage of consoles, getting a TV that could get you into gaming without a console seemed like a pretty good value proposition. Has that changed already, though, as the market has changed? McVeigh: There have been market changes when it comes to supply versus demand. Regulations in China as far as the amount of time you can spend gaming, that’s also another damper. But let’s take another example. Consider a hotel room. Every hotel room has a TV. A lot of them have smart TVs. 
Most of them don’t have game consoles, but I can play a game with a smart TV that is streaming from a service. It could be local to the hotel or something from a CSP and so forth. We see interesting opportunities there that go beyond the traditional console in the home environment. VentureBeat: Is there still a good entry point into the market, even though these shortages have turned into a glut? Is it harder to break into the market now than it was just a few months ago? McVeigh: I don’t think it’s better. There are probably more headwinds versus anything, making it easier. But this is a long-term play. How much are you enabling those experiences on any device? We were showing cloud gaming and Android gaming streamed from a data center that was in the other room, but onto a laptop, an Android tablet, and an iPad. It doesn’t matter what device you’re on. You can play that same game, the same experience. Pause it on one, pick it up on the other, and continue on. It’s not always a console-based experience, but more on the go, wherever I am, being able to experience that with that type of quality. The glut of supply I think would be a difficulty if we were just going after console opportunities. But we have more of these use cases. I’m on the go. I want to be between devices. I want to be in environments where I don’t lug around my console with me. That opens it up considerably. Now, you have some downsides, obviously. You might not get the highest performance. You might have some latency issues. But the class of games you can target is pretty wide. It has a different value proposition. VentureBeat: You still come in pretty inexpensive relative to the competition. Is that also part of the strategy? McVeigh: Right, it is. We’re not trying to be–like Pat said, Moore’s Law is alive. We can use volume, both across the client as well as the data center, to have very attractive price points. 
We can have it packed into very power-efficient solutions that allow you to do it at scale in the data center. VentureBeat: But these aren’t hardware loss-leaders. I’m not sure how to interpret the comments about Moore’s Law that Pat made, and then that [Nvidia CEO] Jensen [Huang] made last week. I guess if you’re Jensen, you might just be saying that because it explains your high price. But how do you look at the contrast? McVeigh: No. We’re not giving these things away. We’re still making a good profit on them. But it’s not obscene profit like maybe the competition has started to try to go after. They’re trying to have excuses for why the price needs to go up. For us, Moore’s Law continues. We have a road map where we’re accelerating, and we can leverage that to deliver value for our customers. VentureBeat: Mining seems to be maybe disappearing from the market. I wonder if everybody sees that as a good thing, that these products are now going where they were intended to go. McVeigh: Going back to supply versus demand, it’s creating some of the rebalancing there. There’s also the inefficiency, if you’re only doing mining, in the power consumption. We’ve announced our Blockscale-based solutions that are more power-efficient for that dedicated workload. That’s adjusting how GPUs are being used in that environment. VentureBeat: Mining made the market a bit more unpredictable. Maybe you could call it volatile. If you remove that from the market, it feels like it’s a good thing for that predictability. It’s now a more understandable market. McVeigh: It’s a very speculative environment. People are spending money based on a market that’s highly volatile. That makes it hard to predict. How much demand is required? Well, today it’s massive. Tomorrow it’s really low. The day after? It’s not easily predictable. VentureBeat: The timing with the CPU seems closely coordinated to have a superior gaming solution out there across the board. 
McVeigh: You’re talking about the Raptor Lake generation? Yeah. Combining the CPU and GPU, we have some additive capabilities. That’s great. And then on the data center side, obviously we’re coming out with Sapphire Rapids. Those will be paired with our data center GPUs to get the benefits, how we have software that goes between the CPU and GPU. You can load balance appropriately. It’s not like every part of the workload should go to the GPU. Some should stay on the CPU. We’ll have the right balance. VentureBeat: As far as things that drive the demand on the datacenter side, are you seeing anything related to the metaverse? McVeigh: Some of the cloud gaming environments that we’re engaged in — they’re precursors to metaverse. They have a clear use case if you want to do gaming, but then enable metaverse opportunities. I think they view some of these opportunities as stepping stones to get there. VentureBeat: I was thinking in general that it’s a bad idea for Moore’s Law to end if we’re just getting started on the metaverse. McVeigh: It’s almost shooting yourself in the foot. If Moore’s Law is dead, then some companies don’t make sense, like GPU companies. That’s kind of important to the business model, that we’ll continue to have more performance. VentureBeat: I guess their argument is that architecture has never been more important. Clever design, smart design. McVeigh: Right. There’s some agreement there. That’s why we’re doing GPUs. We’re doing GPUs. We’re doing dedicated accelerators. We have FPGAs. We build all of those architecturally. It is the right time to find the right balance of those architectures. But all those are enabled by silicon scaling. Not only the process itself, the wafer boundaries, but how you package that together. Moore’s Law, maybe in the strictest sense, is about transistor density, but there are many dimensions to it in my mind. It’s really around how you pack more performance into a certain cost envelope and power envelope. 
You do that with different things. You do it with architecture. You do it with 3D stacking. You do it with new materials. All those things come together. VentureBeat: I remember Raja Koduri said we might need 1,000 times more computing power for the metaverse, or a real-time metaverse for billions of people. It sounds like that might be many years away. McVeigh: It’s not as if one day we’ll just suddenly cross over to the metaverse. It’ll be a continuum. I think he was painting a picture of something indistinguishable from the physical world. That said, we have a road map that will get us many orders of magnitude within the next decade. Process, technology, architectural changes. How much do we integrate? And then software. Sometimes you get 10, 100 times better just from tuning the software for the architecture you created. That gets us pretty close to 1,000 times. VentureBeat: Does it mean we’ll have a 3D internet? As far as just interpretations of what qualifies as the metaverse. McVeigh: I don’t know if I have a canonical definition. My view is, some of the things around simulations of the world for manufacturing, for safety, for the digital twins are out there. That’s one aspect of it. One is 3D representation for entertainment and communication, feeling like we’re in the same room together without having to get on a plane to make that happen. All those things, in my mind, constitute the metaverse. VentureBeat: Do you have something that might be the equivalent of what Nvidia has been calling the Omniverse? McVeigh: We do have some work in that space. We probably haven’t done as good a job of branding it and bringing it together. But we have a number of things around advanced rendering technologies that deal with the same data formats and how those come together. Like I said, probably not as well-packaged as what they have, but we have some things in that space. VentureBeat: It feels like that’s a place where some kind of leadership is possible. 
If you’re pushing the simulation software ecosystem forward on top of the hardware, then that feels like something Intel would do. McVeigh: We want to work with the ecosystem to enable an open version of that, as opposed to, “Here’s a proprietary version. You have to use ours. Good luck.” That’s why we work with game engine providers. We work with others that have defined the data formats and so forth so they can all interact with each other. Now, the positive side of having a complete vertical solution is it is more turnkey. Everything is already there. The downside is you’re locked into a proprietary environment. We’re trying to give the same experience as the turnkey, but with interoperability of components so multiple people can participate. It’s not just one company driving it. VentureBeat: A familiar way of how things play out would be pushing your software ecosystem forward so you can sell more of these. It sounds like you would think that if, say, the metaverse is open and the ecosystem is open, you would sell more of these. McVeigh: Exactly. It’s the “raise all boats” kind of strategy. Make it open so that everyone can participate. There’s more demand for it, more demand for compute. Then let’s make sure our hardware and our systems are highly competitive. Pat’s example is USB. He always tells the joke about his granddaughter thanking him whenever she plugs it in. That’s good for Intel because now PCs are a key part of that ecosystem. VentureBeat: Are we heading that way, to an open ecosystem and metaverse, or do you think we have some things to worry about? McVeigh: I think it’s still early. It could go either way at this stage. We’re still in the earliest stages. Just like our tagline for innovation: “Open, choice, and trust.” That’s how we prefer it. We think the ecosystem benefits from that as well, so we’ll keep going down that path. I think Nvidia is going to focus on a proprietary solution. So, I think it’s open. It’s not decided yet. 
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
13,997
2,022
"OPI Project aims to standardize DPUs and IPUs for industry adoption | VentureBeat"
"https://venturebeat.com/data-infrastructure/opi-project-aims-to-standardize-dpus-and-ipus-for-industry-adoption"
"OPI Project aims to standardize DPUs and IPUs for industry adoption

In recent years, silicon vendors have been building out new types of computing architecture beyond just CPUs and GPUs – welcome to the world of data processing units (DPUs) and infrastructure processing units (IPUs). The goal with DPUs and IPUs is to let organizations offload certain data, cryptography and artificial intelligence/machine learning (AI/ML) tasks to dedicated hardware to accelerate operations. To date, there have been few, if any, standards around DPUs and IPUs to enable interoperability or industry standardization for deployment, management and scheduling, but that’s about to change. 
Today, the Linux Foundation announced the launch of the Open Programmable Infrastructure Project, which aims to collect open-source efforts around DPUs and IPUs and organize vendors to advance adoption for organizations of all sizes. Founding members of the Open Programmable Infrastructure (OPI) project include Intel, Nvidia, Marvell, F5, Red Hat, Dell and Keysight Technologies. Among the initial projects set to become part of the OPI are the IPDK (infrastructure programmer development kit), being developed by Intel, and the Diamond Bluff project, being built by Red Hat and F5. “Red Hat and F5 and a few other companies were working on the Diamond Bluff project and we [Intel] started talking to them and we soon realized that we have similar goals and we should work together,” Kyle Mestery, senior principal engineer at Intel, told VentureBeat. “The goal of Open Programmable Infrastructure Project is to foster a community of open standards and open-source projects around these next-generation architectures, which include DPUs and IPUs.”

Market for DPUs and IPUs is new but growing

Both IPUs and DPUs have found their way into the architectures and deployment of hyperscalers and cloud providers in recent years. The opportunity for the hyperscalers and cloud providers has been to provide a more granular level of services for users to consume. There is also potential for IPUs and DPUs to help enterprise users as well, which is one of the goals that Mestery said he hopes the OPI will achieve in the coming months and years. Intel has been building out infrastructure processing units (IPUs) in recent years as a form of silicon hardware technology that makes use of field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) technology. 
Nvidia has also been active in the space, building out its BlueField DPUs that are focused on accelerating analytics and AI workloads. Marvell is also active in the market with its Octeon DPU technology, which can be used to help support 5G data workloads.

How to manage DPUs and IPUs at scale

Both the IPDK and the Diamond Bluff projects involve looking at different aspects of how an organization can provision IPUs and DPUs in a data center deployment. The OPI project is also talking about the application programming interface (API) layer for IPU and DPU management to help organizations have a unified approach to visibility and control. Mestery emphasized, though, that the OPI isn’t looking to reinvent technologies and will be collaborating with other Linux Foundation initiatives including the LF Networking project and the Cloud Native Computing Foundation (CNCF). Mestery cited the OpenTelemetry project as an example from the CNCF. Its goal is to collect observability data on running operations. He noted that if an OPI open-source project needs telemetry data, it makes sense to leverage OpenTelemetry rather than create something new. As the OPI ramps up, Mestery said he is hopeful that the open-source effort will be able to demonstrate the benefits of what DPU and IPU technology can provide. As a goal, he wants the project to make it easier for organizations to take DPU and IPU technologies and deploy them inside of their own private data centers, whether for edge computing, private cloud or other uses. “I hope that we can provide a framework that makes these technologies easier for organizations to use,” Mestery said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
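To make the unified-management idea above concrete, here is a purely illustrative sketch of what a vendor-neutral DPU/IPU control interface might look like. Every type and method name below is hypothetical and invented for this example; none of it comes from OPI, IPDK or Diamond Bluff:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a common interface lets one control loop drive
// offload devices from different vendors identically.
public class DpuSketch {
    public static void main(String[] args) {
        List<OffloadDevice> fleet = List.of(
                new FakeDpu("vendor-a-dpu"),
                new FakeDpu("vendor-b-ipu"));
        for (OffloadDevice d : fleet) {
            d.provision("ipsec-crypto-offload"); // same call, any vendor
        }
        for (OffloadDevice d : fleet) {
            System.out.println(d.vendor() + " hosts " + ((FakeDpu) d).hosted);
        }
    }
}

// The vendor-neutral surface a standard might define (hypothetical).
interface OffloadDevice {
    String vendor();
    boolean provision(String workload);
}

// In-memory stand-in so the sketch runs without real hardware.
class FakeDpu implements OffloadDevice {
    private final String vendor;
    final List<String> hosted = new ArrayList<>();
    FakeDpu(String vendor) { this.vendor = vendor; }
    public String vendor() { return vendor; }
    public boolean provision(String workload) { return hosted.add(workload); }
}
```

The point of standardization is that the orchestration loop in main never names a vendor; swapping hardware means swapping an implementation class, not rewriting the control plane.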
"
13,998
2,022
"Nvidia moves Hopper GPUs for AI into full production | VentureBeat"
"https://venturebeat.com/games/nvidia-moves-hopper-gpus-for-ai-into-full-production"
"Nvidia moves Hopper GPUs for AI into full production

[Image caption: Nvidia is making its Hopper GPUs.]

Nvidia announced today that the Nvidia H100 Tensor Core graphics processing unit (GPU) is in full production, with global tech partners planning in October to roll out the first wave of products and services based on the Nvidia Hopper architecture. Nvidia CEO Jensen Huang made the announcement at Nvidia’s online GTC fall event. Unveiled in April, H100 is built with 80 billion transistors and has a range of technology breakthroughs. Among them are the powerful new Transformer Engine and an Nvidia NVLink interconnect to accelerate the largest artificial intelligence (AI) models, like advanced recommender systems and large language models, and to drive innovations in such fields as conversational AI and drug discovery. “Hopper is the new engine of AI factories, processing and refining mountains of data to train models with trillions of parameters that are used to drive advances in language-based AI, robotics, healthcare and life sciences,” said Jensen Huang, founder and CEO of Nvidia, in a statement. 
“Hopper’s Transformer Engine boosts performance up to an order of magnitude, putting large-scale AI and HPC within reach of companies and researchers.” [ Follow along with VB’s ongoing Nvidia GTC 2022 coverage » ] In addition to Hopper’s architecture and Transformer Engine, several other key innovations power the H100 GPU to deliver the next massive leap in Nvidia’s accelerated compute data center platform, including second-generation Multi-Instance GPU, confidential computing, fourth-generation Nvidia NVLink and DPX Instructions. “We’re super excited to announce that the Nvidia H100 is now in full production,” said Ian Buck, general manager of accelerated computing at Nvidia, in a press briefing. “We’re ready to take orders for shipment in Q1 (starting in Nvidia’s fiscal year in October). And starting next month, our systems partners from Asus to Supermicro will be starting to ship their H100 systems, starting with the PCIe products and expanding later on this year to the NVLink HDX platforms.” A five-year license for the Nvidia AI Enterprise software suite is now included with H100 for mainstream servers. This optimizes the development and deployment of AI workflows and ensures organizations have access to the AI frameworks and tools needed to build AI chatbots, recommendation engines, vision AI and more. Global rollout of Hopper H100 enables companies to slash costs for deploying AI, delivering the same AI performance with 3.5 times more energy efficiency and three times lower total cost of ownership, while using five times fewer server nodes over the previous generation. 
For customers who want to try the new technology immediately, Nvidia announced that H100 on Dell PowerEdge servers is now available on Nvidia LaunchPad, which provides free hands-on labs, giving companies access to the latest hardware and Nvidia AI software. Customers can also begin ordering Nvidia DGX H100 systems, which include eight H100 GPUs and deliver 32 petaflops of performance at FP8 precision. Nvidia Base Command and Nvidia AI Enterprise software power every DGX system, enabling deployments from a single node to an Nvidia DGX SuperPOD, supporting advanced AI development of large language models and other massive workloads. H100-powered systems from the world’s leading computer makers are expected to ship in the coming weeks, with over 50 server models in the market by the end of the year and dozens more in the first half of 2023. Partners building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro. Additionally, some of the world’s leading higher education and research institutions will be using H100 to power their next-generation supercomputers. Among them are the Barcelona Supercomputing Center, Los Alamos National Lab, Swiss National Supercomputing Centre (CSCS), Texas Advanced Computing Center and the University of Tsukuba. Compared to the prior A100 generation, Buck said, a data center that needed 320 A100 systems could match that throughput with just 64 H100 systems. That’s a fivefold reduction in nodes and a huge improvement in energy efficiency. 
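As a quick back-of-envelope check, the consolidation figures above follow directly from the numbers quoted in the article (32 FP8 petaflops per eight-GPU DGX H100; 320 A100 systems versus 64 H100 systems for equal throughput):

```java
public class HopperMath {
    public static void main(String[] args) {
        // A DGX H100 couples 8 GPUs for 32 petaflops at FP8 precision,
        // which works out to 4 petaflops per H100.
        double perGpuPetaflops = 32.0 / 8;
        // Replacing 320 A100 systems with 64 H100 systems at equal
        // throughput is a fivefold consolidation in server nodes.
        double consolidation = 320.0 / 64.0;
        System.out.println("FP8 petaflops per H100: " + perGpuPetaflops);
        System.out.println("Node consolidation: " + consolidation + "x");
    }
}
```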
"
13,999
2,022
"Oracle revs to Java 19 for speed and stability | VentureBeat"
"https://venturebeat.com/programming-development/oracle-revs-to-java-19-for-speed-and-stability"
"Oracle revs to Java 19 for speed and stability

The official distribution of Java 19 made its appearance today at Oracle’s JavaOne conference in Las Vegas. The new version folds in several key enhancements for simplifying the life of developers while speeding up some of the complex server-side tooling so it can take advantage of modern hardware, especially the most parallel options. If the version number of Java seems to be rising more quickly than in the past, that’s by design. Oracle is committed to rolling out new official versions of the language twice a year. Maintaining this rhythm makes it possible for new enhancements to work their way into the ecosystem and reach deployment. “We’re excited about this release,” said Georges Saab, senior vice president of development of Java at Oracle. “It is the 10th release that we will have done under the six-month cadence and we’ve been doing that now for about five years. 
We’re very pleased with the fact that all of those releases have come predictably, on target, on the date that they were supposed to. We’re really happy with the process that has given us for getting new features into the hands of Java developers more quickly.”

The programming language rivalry

Java is in competition with several other major programming languages for the hearts and minds of developers and the C-level executives who write the checks. The language has a reputation for being a bit wordy while delivering a rock-solid and fast performance across a wide variety of chips and architectures. Over the last decade, other languages like JavaScript, PHP and Python have gradually borrowed many of the successful ideas at the core of the Java stack. They now offer much better performance thanks to imitating some of the just-in-time compilation techniques of the Java virtual machine (VM). At the same time, they can offer a more modern syntax that is attractive to some developers, especially new ones learning the craft.

An open community and a closed enterprise version

The official version rollouts are becoming a bit more ceremonial. The enhancements have been in wide circulation for some time as experimental versions. Oracle wants to engage developers through what it calls the Java Community Process so that the language co-evolves with the needs of the developer community. Some of the most significant enhancements are tagged with words like “Preview” or “Incubator” to signal that they may change more rapidly than other, more stable parts of the codebase. “This comes from our dedication to building trust in the Java ecosystem,” explained Saab. “Things that are done in the open JDK community led by Oracle engineers and developers can see all of this work happening as it’s happening. 
They can read the mailing lists, understand, listen to the design discussions, and see each change in the code as it’s coming in.” While Oracle continues to emphasize and nurture the open-source community of developers that’s grown around Java, it is also pushing a paid option for enterprise customers who are willing to pay for better performance and care. The Java SE Subscription option entitles paying customers to the GraalVM Enterprise version of the VM, as well as access to Java Management Service, a system for monitoring deployed code.

Virtual threads and more improvements

Teams building out server-side stacks will want to evaluate the virtual threads and structured concurrency tools that are emerging from what Oracle calls Project Loom. These virtual threads can be simpler to start up and shut down. In the past, Java’s standard model assigned one operating system thread to each incoming request to a server, an architectural model that allowed all of the requests to be processed independently. The problem, though, is that each thread consumes memory, and the size of the RAM effectively limits the number of requests a server can handle. Lately, some simpler technologies like Node.js have won converts by avoiding the threading model, allowing them to handle much higher loads of simple requests with often dramatically less RAM. The new virtual threads make it possible for Java developers to match this performance. Another area that will draw attention will be the ability to reach out to new forms of hardware. First, Oracle will be rolling out a version of the Java VM for RISC-V, a chip architecture that is found increasingly in some of the new, highly parallel chip designs. It’s not uncommon for some chip designers to talk of packing more than 1,000 RISC-V cores that can operate independently from each other. 
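The virtual-thread model described above (one cheap virtual thread per task, instead of one memory-hungry OS thread per request) can be sketched in a few lines. Note that this is a minimal illustration, not production code: virtual threads are a preview feature in Java 19 (run with --enable-preview), and the API shown here was finalized only in Java 21, so a recent JDK is required.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsSketch {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        // Each submitted task gets its own virtual thread; tens of
        // thousands of them fit in a modest heap, unlike OS threads.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                pool.submit(() -> { completed.incrementAndGet(); });
            }
        } // close() waits for every submitted task to finish
        System.out.println("completed tasks: " + completed.get());
    }
}
```

Structured concurrency, the other half of Project Loom, layers scoped lifetimes on top of this so related tasks can be started, joined and cancelled as a unit.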
Java’s new VM makes it possible for Java developers to write code for this platform, which is expected to draw plenty of attention from artificial intelligence (AI) researchers who often want to use highly parallel chips like this to train AI models. At the same time, the new version of the language includes a vector API that makes it simpler for programmers to write code that will process large blocks of data in chunks. The Java VM will be able to assign these to the right cores in compatible hardware, making it possible for the code to run much faster when the right hardware is available. Some of the other new features include revisions to the Java language itself that simplify some of the syntax and also add more structure that can help prevent bugs. Java 19 marks the final rollout of some of the new ideas that were part of what Oracle calls Project Amber. New record patterns are now available in Java 19 as a preview. These can simplify creating and curating some of the data structures juggled by software. Oracle is also adding better tooling for connecting Java code with code written for other languages. It is beefing up the foreign function interface, which makes it easier for programmers to create hybrid software packages that take advantage of the best features of different languages.

Java platform: Both stable and evolving

While much of the focus will be on reaching the milestones for development, Oracle also wants to emphasize its continual devotion to creating an open community around the language. The company understands that the decision to invest in software languages evolves over years and that programmers crave a platform that is both stable and continually evolving to meet the latest needs. Oracle is investing as heavily in building this community as in the software enhancements that come out of the process. 
“We’ve reached our 1,000,000th certified Java developer and so that’s an exciting milestone,” said Chad Arimura, vice president for Java developer relations at Oracle. “We think that part of our technology and innovation strategy around trust, innovation, predictability — you know, important core values — we also think that applies to the community as well. Trust that there’s going to be a community around you, innovation and ensuring that we’re continuing to innovate the channels that we use to reach those developers, and predictability to ensure that we continue to invest in existing programs that Java developers can use.” "
14,000
2,023
"How ChatGPT in Microsoft Office could change the workplace | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/how-chatgpt-in-microsoft-office-could-change-the-workplace-the-ai-beat"
"How ChatGPT in Microsoft Office could change the workplace | The AI Beat

[Updated by Editor 1/9, 7:19 pm PT] Over the weekend, The Information reported that Microsoft is looking to add OpenAI’s chatbot technology — currently ChatGPT, soon to be GPT-4 — to its Office suite of productivity technologies, including Word, Outlook and PowerPoint. And late today, Semafor reported that Microsoft, which invested $1 billion in OpenAI in 2019, is in talks to invest another $10 billion in the company. >>Follow VentureBeat’s ongoing ChatGPT coverage<< The stream of Microsoft news made me wonder: How would these apps-on-steroids, used by billions of people globally, change how we work? Especially once Google gets fully in the game, integrating its own generative AI capabilities into Google Workspace? 
Will AI become as mundane in our day-to-day work lives as the humble spreadsheet?

Far more than a new Clippy

News of the plans for Office came only a few days after word spread that Microsoft was planning to embed ChatGPT into its search engine Bing. But talk of ChatGPT being integrated with Word led those of an earlier tech generation to immediately giggle. Why? One word: Clippy. Clippy, Microsoft’s user interface agent that came bundled with Microsoft Office in 1997 and was personally launched by Bill Gates, was a hopped-up, big-eyed paperclip who popped up to say things like, “It looks like you’re writing a letter. Would you like help?” Clippy was mostly loathed and mocked for his invasive pop-ups. Time even named Clippy one of the worst inventions of all time. Clippy had disappeared entirely by 2007, although he was resurrected as a cultural icon as a retro sticker pack in Teams in 2021. ChatGPT, of course, would be far more than a new Clippy — it could potentially do everything from generate text based on simple natural language prompts and suggest responses to emails to analyze data in Excel and translate text. Tech investor Puneet Kumar called the possibility “crazy powerful” in a tweet, adding that it would “further deepen” Microsoft’s moats in enterprise office tech.

ChatGPT and similar models not ready for prime time

The Information pointed out that Microsoft Word already uses in-house AI tools, including Turing’s Smart Find feature and At a Glance, which summarizes Word documents. And it has “already quietly incorporated GPT into Word in minor ways,” including in its autocomplete feature. But to implement OpenAI’s ChatGPT or, soon enough, GPT-4, there will be plenty of hurdles to overcome. For one, ChatGPT has a serious accuracy problem, one that is exacerbated by its tendency to sound plausible even when it is dead-wrong. 
Even OpenAI CEO Sam Altman has admitted the risks. That makes corporate document creation or advanced workplace integration a no-go at the moment. Privacy is also an obstacle. How would Microsoft preserve the privacy of corporate data? It’s hard to imagine a large law firm or financial services company that uses Microsoft Office all day long getting help from ChatGPT right now.

Office work will likely change for good

Still, the opportunity to significantly power-charge the day-to-day text output of the average enterprise — emails, presentations, reports — is too tantalizing to ignore. But it may be some time before it’s clear both how Microsoft and Google can make generative AI tools work for business at scale, as well as how enterprises can deal with what employees create. Forrester analyst Rowan Curran said there are “a lot of open questions about what guardrails and controls enterprises put both on how these tools are allowed to be adopted, and how they can be used once they are adopted.” For example, he told VentureBeat by email, “If I have a text generator on my phone that I used to draft a work email or outline a blog post and then I publish that as part of my job – does my employer need to be concerned about, or at least aware of that?” So much about our digital office life — think PDFs, spreadsheets, smartphones, cloud, digital signatures — has become a work ho-hum over the past two decades. Whether it’s ChatGPT in 2023 or not, it seems likely that advances in generative AI are on the path to transforming the workplace as we know it. 
"
14,001
2,023
"Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: 'We are at the iPhone moment of AI' | VentureBeat"
"https://venturebeat.com/ai/nvidia-will-bring-ai-to-every-industry-says-ceo-jensen-huang-in-gtc-keynote-we-are-at-the-iphone-moment-of-ai"
"Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.” “The warp drive engine is accelerated computing, and the energy source is AI,” Huang said. Generative AI capabilities, he said, have “created a sense of urgency for companies to reimagine their products and business models. 
Industrial companies are racing to digitalize and reinvent into software-driven tech companies to be the disrupter and not the disrupted.” >>Follow VentureBeat’s ongoing Nvidia GTC spring 2023 coverage<< Huang’s keynote kicked off with the iconic “I am AI” opening (which launched in 2017) with music that this time around was apparently composed by AI, and arranged by composer John Naesano. Then, Huang launched into a dizzying array of announcements. These included everything from training to deployment for cutting-edge AI services; new semiconductors and software libraries; and a complete set of systems and services for startups and enterprises. The announcements at GTC, which targets Nvidia’s community of over four million developers, come in the context of Nvidia’s continued AI dominance, particularly in the latest era of generative AI. As detailed in VentureBeat’s recent in-depth feature story, Nvidia got a massive AI head start when the hardware and software company helped power the deep learning “revolution” of a decade ago, and shows few signs of losing its lead as generative AI explodes with tools like ChatGPT. In fact, Nvidia powers ChatGPT: According to UBS analyst Timothy Arcuri, ChatGPT used 10,000 Nvidia GPUs to train the model. Nvidia’s technologies are fundamental to AI, said Huang, recounting how Nvidia was there at the very beginning of the generative AI revolution. In his keynote, Huang recounted how back in 2016 he hand-delivered to OpenAI the first Nvidia DGX AI supercomputer — the engine behind the large language model powering ChatGPT. Nvidia DGX supercomputers, originally used as AI research instruments, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers. 
“DGX supercomputers are modern AI factories,” Huang said.

Nvidia calls DGX the blueprint for AI infrastructure

The latest version of DGX features eight Nvidia H100 GPUs linked together to work as one giant GPU. “Nvidia DGX H100 is the blueprint for customers building AI infrastructure worldwide,” Huang said, sharing that Nvidia DGX H100 is now in full production. H100 AI supercomputers are already coming online, he added. Oracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs. And Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs. This follows Microsoft Azure’s private preview announcement last week for its H100 virtual machine, ND H100 v5. Meta has now deployed its H100-powered “Grand Teton” AI supercomputer internally for its AI production and research teams. And OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research.

Nvidia DGX Cloud to bring AI supercomputers ‘to every company’

To speed DGX capabilities to startups and enterprises building new products and developing AI strategies, Huang announced Nvidia DGX Cloud. Through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, Nvidia DGX Cloud will bring Nvidia DGX AI supercomputers “to every company, from a browser.” DGX Cloud is optimized to run Nvidia AI Enterprise, the world’s leading acceleration software suite for end-to-end development and deployment of AI. Nvidia is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud. This partnership brings Nvidia’s ecosystem to cloud service providers while amplifying Nvidia’s scale and reach, Huang said. 
Enterprises will be able to rent DGX Cloud clusters on a monthly basis. Custom LLMs and generative AI for enterprises To accelerate the work of those seeking to harness generative AI, Huang announced Nvidia AI Foundations , a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks. AI Foundations services include Nvidia NeMo for building custom language text-to-text generative models; Picasso, a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content; and BioNeMo, to help researchers in the $2 trillion drug discovery industry. Huang announced an Adobe-Nvidia partnership to build a set of next-generation AI capabilities. Getty Images is collaborating with Nvidia to train responsible generative text-to-image and text-to-video foundation models. And Shutterstock is working with Nvidia to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets. Nvidia invented accelerated computing for AI, including deep learning Nvidia invented accelerated computing to solve problems that normal computers can’t, said Huang. “It requires full-stack invention from chips, systems, networking, acceleration libraries, to refactoring the applications.” Each optimized stack, he explained, accelerates an application domain — from graphics, imaging and quantum physics to machine learning. “The application can enjoy incredible speed-up as well as scale-up across many computers. This enabled us to achieve a million X for many applications over the past decade,” he said. The most famous application of Nvidia’s accelerated computing, he noted, was deep learning. In 2012, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton needed an insanely fast computer to train the AlexNet computer vision model. 
The researchers trained AlexNet , Huang explained, with 14 million images on GeForce GTX 580 GPUs, a job that took 262 quadrillion floating point operations. The trained model won the ImageNet challenge by a wide margin and, Huang said, “ignited the big bang of AI.” A decade later, the Transformer model was invented and Sutskever, now at OpenAI, trained the GPT-3 large language model to predict the next word. Some 323 sextillion floating point operations were required to train GPT-3, Huang said — a million times more floating point operations than to train AlexNet. “The result is ChatGPT, the AI heard around the world,” he said. Huang and Sutskever will surely discuss it all, and more, at their Fireside Chat , scheduled for tomorrow at 9 a.m. Pacific. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,002
2,023
"The enduring legacy of Gordon Moore | VentureBeat"
"https://venturebeat.com/ai/the-enduring-legacy-of-gordon-moore"
"The enduring legacy of Gordon Moore Gordon Moore, chairman emeritus of Intel. Gordon Moore , the elder statesman of the technology industry, passed away today at the age of 94. He was one of the nation’s greatest citizens as a pioneer of the semiconductor industry and chairman emeritus of Intel, which he cofounded in 1968. He was known for formulating Moore’s Law in 1965. He predicted that the number of components on a chip would double every couple of years or so. That prediction has held up remarkably well for about 58 years. In 1965, chip makers could fit about 64 transistors on a chip. By 1971, Intel could fit 2,300 transistors on its first microprocessor, the Intel 4004. Nvidia can now put 80 billion transistors on a graphics processing unit (GPU), and Cerebras can put 2.6 trillion transistors on a pizza-size silicon wafer. That is the power of exponential growth. 
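The arithmetic behind that growth is easy to check; here is a minimal sketch using the figures cited above (the 4004's 2,300 transistors in 1971) and the two-year doubling period of Moore's later, revised prediction:

```python
# Project transistor counts under Moore's Law: doubling every two
# years from the Intel 4004's 2,300 transistors in 1971.

def moores_law(start_count, start_year, year, doubling_period=2.0):
    """Transistor count projected by doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# 25 doublings between 1971 and 2021:
projected = moores_law(2_300, 1971, 2021)
print(f"{projected:,.0f}")  # roughly 77 billion, the same order of
# magnitude as today's largest GPUs
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is why the prediction's longevity mattered so much.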
And it was the reason why Silicon Valley became a global hub of technology and why America led the tech industry. It’s a sad commentary that Moore died the same month that Silicon Valley Bank declared bankruptcy. Moore died at his home in Hawaii, surrounded by his family, Intel said in a statement. Moore and his longtime colleague Robert Noyce founded Intel in July 1968 (Andy Grove was considered an employee, but he was often honored as a cofounder as well). Moore initially served as executive vice president until 1975, when he became president. In 1979, Moore was named chairman of the board and chief executive officer, posts he held until 1987, when he gave up the CEO position and continued as chairman. In 1997, Moore became chairman emeritus, stepping down in 2006. I had the pleasure of meeting Moore in earlier days, when he regularly came out to be a beacon for younger leaders of Silicon Valley, which was a bunch of orchards when he arrived in the Bay Area. Like Intel cofounders Robert Noyce and Andy Grove, Moore became one of Silicon Valley’s greatest thought leaders. He noted at one point that the number of transistors built by the chip industry had just about surpassed the number of ants in the world. Such calculations were an inspiration for engineers around the world. And they communicated the scale of the electronics revolution. He was also nice. I last saw him in person in 2015 at an event that celebrated the 50th anniversary of Moore’s Law. He appeared on stage with New York Times columnist and author Thomas Friedman. They talked about the rise of semiconductors, which Moore pioneered at Shockley Laboratories, Fairchild Semiconductor, and finally Intel. At that event, Moore ambled up on stage a little slowly, but he was still as sharp as you’d expect the cofounder of Intel to be. 
He had a melodic voice and a folksy style. Speaking at the Exploratorium, a monument to science, Moore said, “I was beginning to see in our laboratory that we would get more electronics on a chip, and this was an opportunity to get that message across. I had no idea it would be so precise as a prediction.” The original prediction was that the number of transistors would double every year. In 1975, progress had slowed a little, so Moore revised the prediction to double every two years. Still, that was only a slight miscalculation. More so than anyone else, Moore defined and codified the pace of modern life. Moore’s Law worked like a metronome for Silicon Valley. If you kept up with it, you were successful. If you didn’t, the competition blew past you, according to Silicon Valley author Michael Malone. Moore made his famous prediction in the April 19, 1965 issue of Electronics magazine. Friedman noted that Moore predicted just about every big tech gadget, save for microwave popcorn. At the time of the 2015 event, the Intel Core i5 processor had 3,500 times the processing power of the first Intel microprocessor, the 4004, with 90,000 times more energy efficiency at 60,000 times lower cost. If cars made the same kind of progress, you could go 300,000 miles per hour and your car would cost 4 cents, said then-Intel CEO Brian Krzanich. While Intel still invests huge amounts in R&D, it has been surpassed by longtime rival Advanced Micro Devices in a lot of ways and the two companies are more competitive than ever. And for decades, Intel ruled the microprocessor industry that it invented in the days of Moore’s tenure at Intel. Asked what the biggest lesson of Moore’s Law was, Moore said, “Once I made a successful prediction, I avoided making another.” The crowd laughed. Extrapolating for ten years was pretty wild to Moore, who specialized in self-deprecation. “The fact that it has gone on for 50 years was astounding,” he said. He said Moore’s Law won’t last forever. 
But he said it would work for five or ten years if you apply good engineering. He said he hoped the industry wouldn’t hit a dead end. Many predicted that the industry would hit a standstill on progress decades ago. But while many experts are now doubting that we can stay on the path of Moore’s Law, Intel CEO Pat Gelsinger said that Moore’s Law was alive and well. That very same week, Nvidia CEO Jensen Huang said that Moore’s Law was dead. I noted that was not good timing, as the tech industry was about to inaugurate the race to produce the metaverse. In 2015, Friedman noted that 47% of jobs could be wiped out by automated technology such as artificial intelligence. Moore said, “Don’t blame me for any of that.” Moore was humble. Carver Mead, a Caltech professor, coined the term “Moore’s Law.” Moore said that for the first two decades, he couldn’t utter the term “Moore’s Law” because it was so embarrassing. After that, he was eventually able to say it with a straight face, he said. Asked if Moore’s Law or Murphy’s Law was more popular on Google, Moore said, “Oh, Moore’s Law beats it by a mile.” Grove and Moore When Intel started in July 1968, Robert Noyce served as CEO while Moore was executive vice president. In 1975, Moore became president. In 1979, Moore became chairman and chief executive officer, and he remained in that position until 1987, whereupon he gave up the CEO title and continued as chairman. He was named chairman emeritus in 1997. Grove succeeded Moore as CEO. Grove, a Hungarian immigrant who had to flee the communists in the 1950s, was much more of an aggressive and hard-charging competitor, espousing the philosophy “Only the Paranoid Survive,” which was the title of his memoir. Intel pioneered chips such as dynamic random access memory (DRAM), which is still used as the main memory in personal computers and other devices today. But in the 1980s, the Japanese were “dumping” memory chips at below-cost prices in order to drive rivals out of the market. 
In a famous strategic retreat, Grove had a conversation with Moore. They asked each other what would happen if a new CEO took the helm at Intel. The answer was obvious. Intel would exit the DRAM business. So Grove said he suggested the unthinkable, “Why don’t we do that ourselves?” Intel did exactly that and refocused on its fledgling microprocessor business. That turned out to be one of the best business decisions of all time, as Intel won a deal with IBM to get its microprocessor into the IBM PC. Intel soon had an enduring 80% market share of the PC microprocessor business and it became the biggest chip company in the world. Grove’s decisions inspired books such as Clayton Christensen’s The Innovator’s Dilemma , which points out how rare it is for a company to decide to disrupt its own business by shelving a product it has pioneered. Grove was thinking offensively, seeking to disrupt Intel’s business before someone else did it. He worried about things like discerning “signals from noise” and looking for those special moments that were “strategic inflection points,” when something big changed and the trends were truly turning in another direction. The future of Moore’s Law In 2022, Intel announced that its researchers foresee a way to make chips 10 times more dense through packaging improvements and a layer of a material that is just three atoms thick. And that could pave the way to putting a trillion transistors on a chip package by 2030. During the 2015 event, Moore said, “I can’t see anything else that has gone on for such a long time with exponential growth.” Moore said he got interested in chemistry when he was young by playing with explosives that he created with his chemistry set. He played around with nitroglycerin and was on the road to making dynamite. “Really?” Friedman said in surprise. Moore said he is excited about the frontiers of tech such as robotics, which his grandchildren are working on. 
“Our position in the world in fundamental science has deteriorated pretty badly,” he said. “Other countries are spending more on basic research than we are, and ours is becoming a lot less basic.” “He was a giant in the semiconductor and computer world and leaves behind an amazing legacy,” said longtime analyst Tim Bajarin of Creative Strategies. Back in 2015, Harvey Fineberg, president of the Gordon and Betty Moore Foundation, said that in 1965, the U.S. was investing 10% of its budget in research and development, and now that figure has fallen to less than 4%. Fortunately, in 2022, Congress passed the CHIPS and Science Act, and President Joseph Biden signed it into law. It sets aside tens of billions of dollars for investment in chip factories in the U.S. in an attempt to bring them back from foreign shores. During his lifetime, Moore dedicated his focus and energy to philanthropy, particularly environmental conservation, science and patient care improvements. Along with Betty, his wife of 72 years, he established the Gordon and Betty Moore Foundation, which has donated more than $5.1 billion to charitable causes since its founding in 2000. “Those of us who have met and worked with Gordon will forever be inspired by his wisdom, humility and generosity,” said Fineberg today, in a statement. “Though he never aspired to be a household name, Gordon’s vision and his life’s work enabled the phenomenal innovation and technological developments that shape our everyday lives. Yet those historic achievements are only part of his legacy. His and Betty’s generosity as philanthropists will shape the world for generations to come.” Intel continues to introduce new concepts in physics with breakthroughs in delivering better qubits for quantum computing. Intel researchers work to find better ways to store quantum information by gathering a better understanding of various interface defects that could act as environmental disturbances affecting quantum data. 
Gelsinger, Intel CEO, said in a statement, “Gordon Moore defined the technology industry through his insight and vision. He was instrumental in revealing the power of transistors, and inspired technologists and entrepreneurs across the decades. We at Intel remain inspired by Moore’s Law, and intend to pursue it until the periodic table is exhausted. Gordon’s vision lives on as our true north as we use the power of technology to improve the lives of every person on Earth. My career and much of my life took shape within the possibilities fueled by Gordon’s leadership at the helm of Intel, and I am humbled by the honor and responsibility to carry his legacy forward.” Asked what he wished he had predicted, he said, “I wish I had seen the applications earlier. To me the development of the Internet was a surprise. I didn’t realize it would open up a new world of opportunities.” He added, “We have just seen the beginning of what computers will do for us. The evolution of machine intelligence. It is happening in incremental steps. I never thought I would see an autonomous vehicle driving on our highways.” "
14,003
2,023
"Nvidia and Quantum Machines promote quantum-classical computing at GTC | VentureBeat"
"https://venturebeat.com/programming-development/nvidia-quantum-machines-promote-quantum-classical-computing"
"Nvidia and Quantum Machines promote quantum-classical computing at GTC At its annual GTC event, Nvidia announced a partnership with Tel Aviv-based Quantum Machines to create a state-of-the-art architecture for quantum-classical computing. The collaboration intends to bring about purpose-built infrastructure for quantum computing and GPU supercomputing capable of real-time quantum error correction. Known as DGX Quantum , the first system is expected to deploy to the Israel Quantum Computing Center. Why go for a hybrid quantum-classical architecture? It’s an effort to fill a capability gap while pure-bred quantum computing remains under construction. Quantum-classical bridges pair quantum algorithms or hardware with existing classical systems, so hybrid work can start here and now. 
“If you’re trying to build the revolutionary computer [of the future], you still have to use the most revolutionary computer of the current time to create the ground truth to know whether the quantum computer is generating the right answers,” Jensen Huang, Nvidia’s CEO, said at a GTC press conference. “You can’t just take algorithms developed for classical computing and think that’s going to be appropriate for quantum,” he said. The quantum challenge Today’s quantum computers have limited qubit counts and face serious error correction problems. But researchers continue to develop quantum algorithms that will exploit such computers when they eventually scale up. Nvidia’s GPU-based DGX Quantum addresses these challenges. It matches an Nvidia Grace Hopper system with the CUDA Quantum open-source programming model and with the OPX quantum control platform from Quantum Machines. The combination allows researchers to build applications that integrate quantum methods with cutting-edge classical computing, delivering calibration, control, quantum error correction and hybrid algorithms. At GTC, Timothy Costa, Nvidia’s director of HPC and quantum computing products, cautioned that achieving quantum advantage is a difficult task, requiring solutions to numerous open challenges. One challenge is performing error correction on hundreds of thousands to millions of qubits, which requires petascale computing and minimal latency between the classical compute and the QPU, within the qubits’ coherence time. Another difficulty is that each qubit must be calibrated with numerous independent parameters needing optimization. That’s where DGX Quantum comes into play. Itamar Sivan, CEO and cofounder of Quantum Machines, said that the DGX Quantum system has the potential to significantly reduce the barriers to integrated high-performance computing and quantum computing infrastructure. 
He predicts this integration will enable quantum infrastructure to scale faster and meet the increasing demand for quantum computing. The states of quantum computing The quantum-classical work anticipated at the Israel Quantum Computing Center betokens a trend of governments helping to sponsor quantum initiatives. At least 17 countries have invested in national programs for quantum technology research and development, according to a report by the World Economic Forum. Governments worldwide are making significant investments to support research institutes developing quantum computing technology. China, the U.S., Australia and countries of the European Union are among those investing in quantum computing initiatives. The Canadian government recently announced a plan to invest at least $355 million (USD) in quantum talent, advancing the application of quantum technology and commercializing quantum computing as part of a new National Quantum Strategy. Similarly, the U.K. has announced a regulatory framework to support innovation in, and the ethical use of, quantum technologies. Meanwhile, universities such as the Massachusetts Institute of Technology, Princeton University and the University of Waterloo are working collaboratively on developing quantum computer prototypes. Nvidia as a quantum platform A key player in quantum computing, Nvidia already boasts a long list of offerings that aim to accelerate quantum research, algorithm design, the development and discovery of applications, and tackling the challenge of building quantum integrated supercomputers — thereby taking the first steps to delivering on the promise of quantum computing at an industry level. With the Nvidia quantum platform, researchers can simulate quantum processors at scale, and with performance far beyond what can be achieved on physical quantum processors today. This will enable them to design and develop better quantum algorithms for the processors of tomorrow. 
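Classically simulating a quantum processor, as described above, comes down to linear algebra on state vectors. A toy sketch in plain NumPy (not Nvidia's actual simulator stack) that prepares a two-qubit Bell state:

```python
import numpy as np

# Two-qubit statevector simulation: apply a Hadamard then a CNOT to
# |00> to produce the entangled Bell state (|00> + |11>) / sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I2) @ state                 # Hadamard on qubit 0
state = CNOT @ state                           # entangle the pair

probs = np.abs(state) ** 2                     # measurement probabilities
print(probs.round(3))  # equal probability of |00> and |11>
```

Each added qubit doubles the length of the state vector, which is why simulating processors "at scale" demands GPU supercomputers, and why exact simulation runs out of road well before the qubit counts that practical error correction requires.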
CUDA Quantum developers can discover and test integrated quantum-classical applications, using CPUs, GPUs, simulated QPUs and physical QPUs together, each handling the parts of the workflow it does best. And now, with the addition of DGX Quantum, customers can deploy tightly integrated quantum-classical systems capable of using real-time GPU compute to make error correction, calibration control and hybrid algorithms possible at scale. Until the quantum comes along Nvidia’s Costa pointed out that the Nvidia quantum platform is focused on enabling and collaborating with the entire quantum computing-related economy, which has rapidly expanded over the past two years. Nvidia has partnered with quantum hardware builders, software companies and simulation frameworks, as well as system builders and integrators, major CSPs and research centers worldwide. Among Nvidia’s quantum partners beyond Quantum Machines are Atom Computing, IonQ and Oxford Quantum Circuits. At GTC, Nvidia CEO Huang noted that “the quantum research community and development community is really vibrant around the world. There are a whole lot of interesting things to go and solve.” Still, he cautioned that it is “solidly a decade and two decades away to have … broadly useful quantum systems.” Even when those systems come along, quantum-classical hybrids will likely still be at work. "
14,004
2,022
"Quantum computing pioneer D-Wave looks at the technology's past, present and future | VentureBeat"
"https://venturebeat.com/ai/quantum-computing-faces-the-ghosts-of-its-past-present-and-future"
"Quantum computing pioneer D-Wave looks at the technology’s past, present and future Quantum computing could be a disruptive technology. It’s founded on exotic-sounding physics and it bears the promise of solving certain classes of problems with unprecedented speed and efficiency. The problem, however, is that to this day, there has been too much promise and not enough delivery in the field, some say. Perhaps with the exception of D-Wave. The company that helped pioneer quantum computing over 15 years ago has clients such as BASF, Deloitte, Mastercard and GlaxoSmithKline today. Alan Baratz went from running D-Wave’s R&D to becoming its CEO, taking the company public while launching products and pursuing new research directions. 
In an exclusive interview, Baratz spoke to VentureBeat about quantum computing fundamentals and how this is related to the market’s current state, real-world clients and use cases, and what the future holds for this space. Quantum computing hype and reality Baratz has a diverse background that includes product management stints at Avaya and Cisco, startup CEO stints and exits, as well as venture investment experience. What he considers closer to the work he is doing today with D-Wave, however, is being the first president of JavaSoft at Sun Microsystems. At JavaSoft, Baratz was responsible for bringing the Java technology to market, building the developer ecosystem and growing revenue. As he noted, a lot of what he did there is similar to what D-Wave is doing now: creating a new industry and building a new ecosystem. Of course, there are some fundamental differences. Java worked from Day 1, albeit not perfectly, and was built on existing infrastructure. From there, the technology grew, matured, and conquered the software development world. Last but not least, there is no fundamental technology divide in Java and, even though there may have been some hype and controversy around it at some point, it’s long been a proven technology. Quantum computing, on the other hand, is a radically new concept that took years of R&D to develop and isn’t aimed at software developers. There is a fundamental technology divide in quantum computing, which Baratz explained is the source of both D-Wave’s place in the market as well as the hype. And yes, there’s lots of hype around quantum computing. According to McKinsey , the current state of quantum computing is between hype and revolution. 
According to the managing director of research at Bank of America, Haim Israel, quantum computing will be “bigger than fire.” According to quantum computing expert Sankar Das Sarma, quantum startups are all the rage, but it’s unclear if they’ll be able to produce anything of use in the near future. Baratz’s own position seems to be somewhere in between the above, drawing a line between quantum computing applications today and in the future, as well as between D-Wave and the competition. “While everybody else in the quantum industry talks about government research grants as revenue and national labs and academic institutions as customers, we talk about companies like Mastercard, PayPal, GlaxoSmithKline, Johnson and Johnson, Volkswagen, BASF, Deloitte, SavantX and the port of L.A.,” said Baratz. Quantum computing history and fundamentals The dividing line between D-Wave and the competition that Baratz drew coincides with the line between the two different ways of building quantum computers: quantum annealing and gate models. As Baratz explained, when D-Wave embarked on the task to build a quantum computer over 15 years ago, it was thought that a gate model system could solve all problems. Quantum annealing, on the other hand, was known to only be able to address certain classes of problems. There are four categories of problems that quantum computers can solve: optimization, linear algebra, factorization and differential equations. Baratz provided examples of applications for each: machine learning for linear algebra, cryptography for factorization and computational fluid dynamics and quantum chemistry for differential equations. Optimization has a wide range of applications in physics, biology, engineering, economics and business. As Baratz noted, annealing quantum computers are very good at optimization problems. They can also solve linear algebra and factorization problems, but they cannot solve differential equation problems. 
Back when D-Wave set out to build its quantum computer, the science and the engineering had not yet progressed to the point where it was believed that you could build a gate model system, Baratz explained. However, he added, it was widely accepted that you could build an annealing quantum computer. So D-Wave decided to go ahead and build an annealing system because that was something they believed they could do. “Everybody else concluded that they might as well build a gate model system because they believed they [eventually] could and it could solve all problems, whereas annealing, it was known, could only solve a subset of the problems. So, everybody else jumped into gate. What happened was: a year ago, everybody got surprised, us included, because that’s the point in time at which it was proven that gate model systems can’t really deliver a speed-up on optimization problems”, Baratz noted. Gate model systems are very good at differential equations problems. They can also attack linear algebra and factorization, but they cannot address optimization problems, Baratz said. In a nutshell, annealing can’t solve differential equations, while gate can’t solve optimization. As optimization has many potential applications, it turns out that’s pretty important. D-Wave took what looked like a more conservative approach originally and was vindicated in retrospect. Baratz called this “a fluke of history that worked out really well for us.” By now, D-Wave has the first-mover advantage in annealing. This means they don’t just have expertise and technology others don’t, they also have a number of patents. All of that results in an effective moat for the company. The problem with quantum computing A year ago, D-Wave concluded that their annealing quantum computers had achieved commercial status. 
That means that they were capable of solving real business problems at commercial scale and “a lot, if not most, of the hard underlying technological problems had been solved,” according to Baratz. As the company had some bandwidth, they decided to initiate a gate model program that would allow them to eventually be able to address the full market for quantum. Therefore, D-Wave also has firsthand experience of the issues gate model-based efforts are facing. The most severe one is dealing with errors. In conventional computing, bits are used for calculations and for storing information. The equivalent in quantum computing is qubits, and there is lots of talk about how many qubits each system can manage. The problem, however, is that more in this case does not necessarily mean better. Qubits are much more sophisticated than bits, but there are many more ways that errors can be introduced, too. That typically happens by interacting with the environment, for example via electromagnetic interference. As Baratz noted, no system, quantum or otherwise, is error-free. In classical computers, we don’t usually think about errors because there are error-correction algorithms that take care of them. Quantum computers are not there yet. Again, however, there are differences between annealing and gate model systems, according to Baratz. Gate model systems are very sensitive to errors, and that has to do with the way computation is performed. Doing a computation on a gate model system means applying instructions to qubits, similar to applying instructions to bits in classical computers. As soon as an error gets introduced, if it’s not corrected, the computation falls apart. “Since these errors occur so frequently, without error correction, you can’t get through more than 20 or 30 instructions without the introduction of an error and the computation falling apart.
But for many of the gate model algorithms, you need tens of thousands, hundreds of thousands or millions of gate instructions. So, you can’t do very much with a gate model system without error correction,” Baratz said. Baratz sees error correction, not number of qubits or topology, as the key to enabling qubits that can have high fidelity through long computations and therefore making progress in the development of gate model systems. His estimate is that we are at least seven to 10 years away from reaching that point. Annealing-based systems are much more stable, he said, although an increase in number of qubits and better qubit connection topologies would enable them to tackle more complex problems than they can solve today. Solving real-world problems Baratz referred to fully optimizing FedEx routing from backbone to last mile as a problem that cannot be tackled today, as that would require tens of millions of variables. D-Wave is not there yet; however, a number of important real-world problems can already be solved. At the same time, progress is being made in terms of new computers with more qubits, better connectivity and lower error rates. Baratz also referred to some of the problems that are being solved today, such as customer offer allocation for Mastercard, job scheduling for BASF and supply chain logistics with SavantX and the port of L.A. In that last use case, a 60% improvement in the performance of the cranes loading and offloading the containers and a 12% reduction in the time for vehicles to pick up goods were achieved. Based on Baratz’s description, the philosophy of using gate model-based quantum systems sounds closer to programming classical computers. Using annealing-based quantum systems, however, is very different. There is no programming in the conventional sense involved.
Tasks are modeled as optimization problems, which means that users need to declaratively state how their problems are defined, what the parameters are and how they are interdependent. As Baratz noted, this is not something software engineers are expected to do, but rather something addressed by people like data scientists and data analysts. Optimization problems are often specified as what’s called a linear programming problem or a quadratic programming problem. This is the language that optimization engineers use, Baratz said, and D-Wave allows them to take that specification and feed it directly to hybrid solvers. A hybrid solver utilizes both quantum and classical computers to solve problems. D-Wave has a hybrid solver in its offering, which recently got an upgrade. As Baratz described, the hybrid solver takes problem definitions as input and can determine which parts of the problem can be addressed by the quantum computer. It subsequently routes those parts of the problem to the quantum computer. D-Wave’s offering, traction and roadmap D-Wave offers a cloud service called Leap through which users can access its capabilities: quantum computers, hybrid solvers and software development tools. D-Wave also offers professional services to help clients with things like problem formulation or job submission, where expertise is not available in-house. Given the current state of quantum computing, we wondered whether D-Wave’s clientele is made up exclusively of the world’s largest companies. D-Wave is itself a publicly traded company listed on the New York Stock Exchange. As Baratz explained, by going public, D-Wave managed to raise cash and open up a variety of new funding sources. In the call to discuss D-Wave’s recent Q3 results, which Baratz referred to as strong on all levels, the company announced that in the first three quarters of 2022, they had over 100 customers. Of those, 40 are government and education and 60 commercial, of which over 20 are Global 2000.
D-Wave has around 40 commercial customers that are not Global 2000, Baratz said, such as a Canadian grocery chain called Save on Foods. D-Wave’s core offering is also available via AWS Marketplace. In addition, D-Wave has a more targeted offering on AWS Marketplace: feature selection for machine learning. Feature selection is one of the most important elements of machine learning. When training a machine learning model, there will be a number of characteristics or classifiers that may be of interest to include. But including all of them will result in overfitting; i.e., generating a model that is not suited for the task at hand. This is why a pre-processing step in machine learning is trying to identify a small set of representative characteristics and then building a model on that set. Finding a small set of strong classifiers from a big set of weak classifiers is a very hard optimization problem, and one in which D-Wave’s system does well. This is often used in fraud detection, Baratz said. Other parts of the machine learning process pipeline are not addressed by D-Wave at this point, because neither its quantum computer nor any of the gate model systems are yet capable of beating GPUs, according to Baratz. Overall, Baratz concluded, the quantum ecosystem is defined by the annealing vs. gate models divide. Annealing is commercial today, while with gate models, things are still at a research and experimentation stage. “We’re the only company in the world that does annealing to address optimization. Now we’re doing gate as well. So, we’ll be the only company in the world that can address the full market for quantum,” Baratz said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
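The feature-selection task described above, picking a few strong classifiers out of many weak ones while avoiding redundancy, can be posed as a small optimization problem. Below is a classical, brute-force sketch of an mRMR-style objective (maximize relevance to the target, penalize pairwise redundancy); the scores are invented and nothing here uses D-Wave's actual Leap offering.

```python
from itertools import combinations

# Invented relevance (feature vs. target) and redundancy (feature vs.
# feature) scores; in practice these would come from correlation or
# mutual-information estimates on real training data.
relevance = {"f0": 0.9, "f1": 0.85, "f2": 0.3, "f3": 0.8}
redundancy = {("f0", "f1"): 0.95, ("f0", "f2"): 0.2, ("f0", "f3"): 0.1,
              ("f1", "f2"): 0.2, ("f1", "f3"): 0.15, ("f2", "f3"): 0.05}

def score(subset):
    # Total relevance minus total pairwise redundancy.
    rel = sum(relevance[f] for f in subset)
    red = sum(redundancy[tuple(sorted(pair))]
              for pair in combinations(subset, 2))
    return rel - red

def select(k):
    # Brute force over all k-subsets; an annealer would instead sample
    # the same objective encoded as a QUBO.
    return max(combinations(sorted(relevance), k), key=score)
```

Here f0 and f1 are both highly relevant but nearly duplicate each other, so selecting two features pairs f0 with the less redundant f3 rather than with f1.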
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,005
2,022
"Cloudflare's post-quantum cryptography protects almost a fifth of the internet | VentureBeat"
"https://venturebeat.com/security/cloudflare-post-quantum-cryptography"
"Cloudflare’s post-quantum cryptography protects almost a fifth of the internet The countdown to Y2Q, the day when quantum computers can decrypt public key algorithms, is on. While researchers don’t know exactly when this will happen, the Cloud Security Alliance (CSA) estimates this could be as soon as April 14, 2030. Although many organizations are waiting for post-quantum threats to become tangible before taking action against them, other providers like content delivery network (CDN) giant Cloudflare are diving straight in and responding with quantum-safe solutions. Today, Cloudflare announced it has launched post-quantum cryptography support for all websites and APIs served through its network. Essentially, this will introduce quantum computer-proof encryption for all sites using Cloudflare, which accounts for 19.1% of all websites according to W3Techs.
Above all, the fact that a prominent security vendor like Cloudflare is committing to post-quantum cryptography highlights that enterprises should take the threat of malicious quantum computers seriously. Countdown to Y2Q: Why the time for post-quantum is now The announcement comes shortly after Cloudflare announced the release of the first Zero Trust SIM to secure mobile devices, and a $1.25 billion funding program designed to help startups scale their businesses. Now Cloudflare is the first content delivery network to support post-quantum TLS based on NIST’s chosen post-quantum algorithm. While this decision may seem premature, it comes at the right time to counter “harvest now, decrypt later” attacks. Currently, threat actors and nation-states can collect encrypted data with the intention of decrypting it once quantum computing advances to the necessary level. “There is an expiration date on the cryptography we use every day. It’s not easy to read, but somewhere between 15 or 40 years, a sufficiently powerful quantum computer is expected to be built that’ll be able to decrypt essentially any encrypted data on the Internet today,” wrote Cloudflare in the announcement blog post. “Starting today, as a beta service, all websites and APIs served through Cloudflare support post-quantum hybrid key agreement. This is on by default; no need for an opt-in. This means that if your browser/app supports it, the connection to our network is also secure against any future quantum computer,” the post said. The post-quantum cryptography market As quantum computers develop further, interest in post-quantum cryptography continues to grow, with researchers anticipating that the post-quantum cryptography market will reach a value of $476.8 million by 2030, growing at a compound annual growth rate (CAGR) of 18.67%.
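The "post-quantum hybrid key agreement" Cloudflare describes pairs a classical key exchange (such as X25519) with a post-quantum KEM (such as Kyber) and derives one session key from both shared secrets, so an attacker must break both exchanges to read the traffic. The sketch below shows only a plausible HKDF-style combining step; the labels and transcript are invented and this is not Cloudflare's implementation.

```python
import hashlib
import hmac
import os

def combine_secrets(classical_ss: bytes, pq_ss: bytes, transcript: bytes) -> bytes:
    """Derive a 32-byte session key from both shared secrets (HKDF-style)."""
    # Extract: mix the concatenated secrets under a transcript-bound salt.
    prk = hmac.new(hashlib.sha256(transcript).digest(),
                   classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: a single HMAC block suffices for one 32-byte key.
    return hmac.new(prk, b"hybrid key" + b"\x01", hashlib.sha256).digest()

# Random stand-ins for the X25519 and Kyber outputs (illustrative only).
classical = os.urandom(32)
post_quantum = os.urandom(32)
session_key = combine_secrets(classical, post_quantum, b"client_hello|server_hello")
```

Because the derivation mixes in the post-quantum secret, later recovering the classical secret alone (say, with a future quantum computer) does not reveal the session key.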
Of course, Cloudflare isn’t the only provider taking post-quantum threats seriously. Other vendors like PQShield, which announced raising $20 million in funding earlier this year, are leveraging post-quantum cryptography to enable enterprises to develop secure cryptographic solutions for messaging platforms, apps and mobile technologies. Likewise, SandboxAQ, which Alphabet spun off at the start of this year with 9 figures in funding, is combining artificial intelligence and quantum computing to offer next-generation encryption solutions. The vendor’s Security AQ Analyzer creates a cryptographic inventory to understand an organization’s cryptographic posture and helps plan the move to post-quantum cryptography. Its Security AQ Maestro solution then uses machine learning to automate the orchestration of algorithms and protocols to optimize performance for end users. However, Cloudflare’s widespread reach as one of the largest CDN providers in the market gives it the potential to contribute to the most widespread adoption of post-quantum cryptography yet. "
14,006
2,022
"What the growth of AIops solutions means for the enterprise | VentureBeat"
"https://venturebeat.com/ai/what-the-growth-of-aiops-solutions-means-for-the-enterprise"
"What the growth of AIops solutions means for the enterprise Without exaggeration, digital transformation is moving at breakneck speed, and the verdict is that it will only move faster. More organizations will migrate to the cloud, adopt edge computing and leverage artificial intelligence (AI) for business processes, according to Gartner. Fueling this fast, wild ride is data, and this is why for many enterprises, data — in its various forms — is one of their most valuable assets. As businesses now have more data than ever before, managing and leveraging it for efficiency has become a top concern. Primary among those concerns is the inadequacy of traditional data management frameworks to handle the increasing complexities of a digital-forward business climate.
The priorities have changed: Customers are no longer satisfied with immobile traditional data centers and are now migrating to high-powered, on-demand and multicloud ones. According to Forrester’s survey of 1,039 international application development and delivery professionals, 60% of technology practitioners and decision-makers are using multicloud — a number expected to rise to 81% in the next 12 months. But perhaps most important from the survey is that “90% of responding multicloud users say that it’s helping them achieve their business goals.” Managing the complexities of multicloud data centers Gartner also reports that enterprise multicloud deployment has become so pervasive that until at least 2023, “the 10 biggest public cloud providers will command more than half of the total public cloud market.” But that’s not where it ends — customers are also on the hunt for edge, private or hybrid multicloud data centers that offer full visibility of the enterprise-wide technology stack and cross-domain correlation of IT infrastructure components. While justified, these functionalities come with great complexities. Typically, layers upon layers of cross-domain configurations characterize the multicloud environment. However, as newer cloud computing functionalities enter the mainstream, new layers are required — thus complicating an already-complex system. This is made even more intricate with the rollout of the 5G network and edge data centers to support the increasing cloud-based demands of a global post-pandemic climate. Ushering in what many have called “a new wave of data centers,” this reconstruction creates even greater complexities that place enormous pressure on traditional operational models.
Change is necessary, but considering that the slightest change in one of the infrastructure, security, networking or application layers could result in large-scale butterfly effects, enterprise IT teams must come to terms with the fact that they cannot do it alone. AIops as a solution to multicloud complexity Andy Thurai, VP and principal analyst at Constellation Research Inc., also confirmed this. For him, the siloed nature of multicloud operations management has resulted in the increasing complexity of IT operations. His solution? AI for IT operations (AIops), an AI industry category coined by tech research firm Gartner in 2016. Officially defined by Gartner as “the combination of big data and ML [machine learning] in the automation and improvement of IT operation processes,” the detection, monitoring and analytic capabilities of AIops allow it to intelligently comb through countless disparate components of data centers to provide a holistic transformation of their operations. By 2030, the rise in data volumes and its resulting increase in cloud adoption will have contributed to a projected $644.96 billion global AIops market size. What this means is that enterprises that expect to meet the speed and scale requirements of growing customer expectations must turn to AIops. Otherwise, they run the risk of poor data management and a consequent fall in business performance. This need creates a demand for comprehensive and holistic operating models for the deployment of AIops — and that is where Cloudfabrix comes in. AIops as a composable analytics solution Inspired to help enterprises ease their adoption of a data-first, AI-first and automate-everywhere strategy, Cloudfabrix today announced the availability of its new AIops operating model. It is equipped with persona-based composable analytics, data and AI/ML observability pipelines and incident-remediation workflow capabilities.
The announcement comes on the heels of its recent release of what it describes as “the world-first robotic data automation fabric (RDAF) technology that unifies AIops, automation and observability.” Identified as key to scaling AI, composable analytics give enterprises the opportunity to organize their IT infrastructure by creating subcomponents that can be accessed and delivered to remote machines at will. Featured in Cloudfabrix’s new AIops operating model is a composable analytics integration with composable dashboards and pipelines. Offering a 360-degree visualization of disparate data sources and types, Cloudfabrix’s composable dashboards feature field-configurable persona-based dashboards, centralized visibility for platform teams and KPI dashboards for business-development operations. Shailesh Manjrekar, VP of AI and marketing at Cloudfabrix, noted in an article published on Forbes that the only way AIops could process all data types to improve their quality and glean unique insights is through real-time observability pipelines. This stance is reiterated in Cloudfabrix’s adoption of not just composable pipelines, but also observability pipeline synthetics in its incident-remediation workflows. In this synthesis, likely malfunctions are simulated to monitor the behavior of the pipeline and understand the probable causes and their solutions. Also included in the incident-remediation workflow of the model is the recommendation engine, which leverages learned behavior from the operational metastore and NLP analysis to recommend clear remediation actions for prioritized alerts. 
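As a concrete, heavily simplified example of the detection side of AIops: an observability pipeline watches a metric stream and turns statistical outliers into prioritized alerts that a remediation workflow can act on. The sketch below uses a trailing-window z-score in plain Python; the threshold, field names and latency series are all invented, and this is not Cloudfabrix's product.

```python
import statistics

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate sharply from the trailing window."""
    alerts = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = statistics.fmean(hist)
        stdev = statistics.pstdev(hist) or 1e-9   # avoid division by zero
        z = abs(series[i] - mean) / stdev
        if z > threshold:
            # Severity lets a downstream workflow prioritize the alert.
            alerts.append({"index": i, "value": series[i], "severity": round(z, 1)})
    return alerts

# A steady latency metric with one spike (milliseconds, invented data).
latency_ms = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 20, 95, 21, 20]
alerts = detect_anomalies(latency_ms)
```

A production pipeline would add learned baselines, seasonality and cross-metric correlation, but the flow is the same: detect, score, then route to remediation.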
To give a sense of the scope, Cloudfabrix’s CEO, Raju Datla, said the launch of its composable analytics is “solely focused on the BizDevOps personas in mind and transforming their user experience and trust in AI operations.” He added that the launch also “focuses on automation, by seamlessly integrating AIops workflows in your operating model and building trust in data automation and observability pipelines through simulating synthetic errors before launching in production.” Some of those operational personas for whom this model has been designed include cloudops, bizops, GitOps, finops, devops, DevSecOps, Exec, ITops and serviceops. Founded in 2015, Cloudfabrix specializes in enabling businesses to build autonomous enterprises with AI-powered IT solutions. Although the California-based software company markets itself as a foremost data-centric AIops platform vendor, it’s not without competition — especially with contenders like IBM’s Watson AIops, Moogsoft, Splunk and others. "
14,007
2,022
"What headless commerce is and why it's important | VentureBeat"
"https://venturebeat.com/data-infrastructure/what-headless-commerce-is-and-why-its-important"
"Guest What headless commerce is and why it’s important The next stage of ecommerce evolution has been building up for several years. As businesses look to keep up with omnichannel demands, they are likely reading more and more about “headless commerce” and its benefits. Companies should be aware of several aspects of this type of architecture when deciding if headless commerce is suitable for them. Let’s dive into it: What exactly is a headless architecture? In essence, it’s where the frontend presentation layer is decoupled from any of the backend systems. All the backend systems become “headless,” with the frontend presentation layer becoming the “head.” You can have many heads: a website, a mobile app, a watch app, a kiosk in a store. All use the same backend systems in the same way.
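The one-backend, many-heads idea can be sketched in a few lines. The endpoint shape and product data below are hypothetical, not any particular commerce platform's API; the point is that every head consumes the same backend response and only the presentation differs.

```python
def get_product(product_id):
    # Stand-in for a backend API call such as GET /products/{id};
    # every head receives exactly this same payload.
    return {"id": product_id, "name": "Trail Shoe", "price_cents": 8900}

def render_web(product):
    # The website "head" renders HTML.
    return f"<h1>{product['name']}</h1><p>${product['price_cents'] / 100:.2f}</p>"

def render_kiosk(product):
    # The in-store kiosk "head" renders plain text from the same data.
    return f"{product['name']}: ${product['price_cents'] / 100:.2f}"

product = get_product("sku-1")
```

Adding a new head (say, a watch app) means writing one more renderer against the same API contract; the backend is untouched.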
Backend systems include commerce, content management, product information management and order management, to name a few. The key to these backend systems being capable of operating in this architecture is that they have powerful application programming interfaces (APIs) that enable you to do everything the application can do. Pros and cons of headless commerce There are several benefits to adopting a headless architecture. These include greater flexibility, faster release of new features, seamless experience across all channels, security and scalability. A breakdown of these benefits and how they work: System swaps: Some systems or platforms will improve over time, and some will not. The latter will become unsupported or reach their “end of life.” Some may need improvement or upgrades as a business grows. Greater agility: If a new system is needed, it can be added easily through API connections and placed in the head. An example would be a loyalty program being added to an ecommerce company. Add the system once and connect it from each head as needed. More robust: Separation of frontend presentation from backend logic makes the system more robust, as changes in the front will not affect your backend logic. Each system is robust, making the complete system more reliable. While there are several benefits to a headless architecture, there are also potential challenges that need to be addressed: Cost: Separating the frontend and backend systems means each will require its own maintenance and hosting. Having good partners or an already-strong in-house IT department helps mitigate this cost, but it can still be higher than a single system. Complexity: Managing the systems independently of one another means understanding bugs in two different systems or building security for two different systems.
Each team will have a learning curve as they build out and implement the separated front and back ends. Getting their systems to the state where the benefits become a reality can be a challenge for many companies. The benefits of headless commerce truly shine when most of the organization’s systems (or at least the systems in one area) have become headless and decoupled. As an organization builds towards this state, it will be in a hybrid world where it’s mixing old and new, working to overcome these challenges. Companies should be fully aware that the headless transition process might be challenging and time-consuming. This might leave a bad taste for those involved in the process until the entire system or a specific area has been transitioned and solved. Don’t solve for one area; plan for digital transformation Companies that embark on this transformation should be aware of what the process entails. You cannot just have a headless commerce and content system to get the real benefits of headless for the customer experience. Customer service, order management, inventory, loyalty and CRM, to mention a few, all need to be part of any transition. Without the entire customer experience being architected this way, you will slow down your own transformation and limit the customer experience. Beware of evangelists In this space, it’s very easy to find evangelists — people who are doggedly attached to their version of the future. They often disparage other systems and quickly write them off as “old” or “hard to integrate with.” This is very easy to do when sitting inside a company self-described as “modern.” The reality of digital transformation, systems and architecture is that every company is on a spectrum of transformation, and some are further along than others and have their own challenges. The right way to transform is different for everyone.
While there are some poorer systems out there, buyers should be aware that claims from evangelists must be fact-checked or somehow substantiated when evaluating options. Conclusion To answer our original question, does everyone need headless? Organizations must weigh the pros and cons listed above to determine what best suits their needs. Organizations should be wary of the process that adopting this architecture requires. Separation of the frontend “head” from the backend “headless” systems benefits reliability and performance, laying the foundation to move fast in the future. This transformation takes time: Many old and legacy systems cannot function inside a headless architecture, especially within ERP systems (the older they are generally, the worse they are if not rearchitected). Companies must be strategic in their investments as they decouple their systems and begin implementing headless commerce. Understanding the benefits and challenges enables organizations to create a plan that recognizes, and allows them to navigate, the bumps on the road ahead. Gerry Szatvanyi is CEO of OSF Digital. Rob Smith is VP of Go-To-Market at OSF Digital. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own!
"
14,008
2,023
"How microservices have transformed enterprise security | VentureBeat"
"https://venturebeat.com/security/how-microservices-have-transformed-enterprise-security"
"Guest How microservices have transformed enterprise security The microservices revolution has swept across the IT world over the past several years, with 71% of organizations reporting adopting the architecture by 2021. When discussing microservices, we often hear their advantages framed in terms of agility and flexibility in delivering innovations to customers. But one angle that’s not spoken about as much is enterprise security. In the age of monolithic applications, a single security problem could mean hundreds or thousands of man-hours spent rebuilding an application from scratch. Along with having to patch out a security flaw itself, this also meant that DevOps and security teams would have to review and reconstruct the application to tweak dependencies — sometimes having to effectively reverse engineer entire applications. Microservices have upended this paradigm.
They allow DevOps to ring-fence security flaws or concerns and address them without worrying about breaking their entire application stack. This doesn’t just mean a quicker turnaround for security patches, but more resilient and efficient DevOps teams and IT stacks overall. How microservices help ring-fence security flaws Stepping back, it’s worth reminding ourselves what a microservice architecture is: A collection of services that are independently deployable and loosely tied together via intermediaries such as APIs. These individual services typically reflect the most fundamental building blocks of your applications. In practice, containers are the technology used to deliver microservices architectures. These lightweight and standalone packages bundle application code with lightweight OSes, runtimes, libraries and configuration data. By using an orchestration system like Kubernetes, individual containers can exchange their outputs with one another, enabling them to perform the overarching task that would once have been achieved through a monolithic application. The microservices architecture that is most commonly delivered by containers ring-fences many security risks by design. With individual microservices only exchanging their outputs via the intermediary orchestrating them, it’s very difficult for a breach or compromise of a single microservice to permeate the entire application. Playing with the calendar But what does the above mean in practice? Here’s a thought experiment. A few years ago, manufacturers discovered that many consumer devices were rendered unusable if their date was changed to 1/1/1970. Imagine if we introduced that flaw into the calendar application that’s used in an enterprise environment. 
Now, imagine a black hat attacker spotted the issue before the security team did and then proceeded to obtain someone’s credentials and change the current date in the calendar app to 1/1/1970. If the enterprise’s DevOps team worked with a monolithic application, they would have to do the following: First, they would have to contend with widespread system malfunctions arising from the attack, which they can’t fix until they address the flaw. Second, assuming they discovered the flaw was with their calendar app, they would have to examine the entire source code for the app and manually find where the problem lies. Finally, they would have to review the entire calendar app’s source code to change any references to variables or statements tied to the bugged lines of code. What does this look like if that same DevOps team worked with a microservices architecture? First, once the black hat attacker had changed the date, the team would notice that the particular microservice that contains the flaw is malfunctioning. Second, assuming they’re using containers, their Kubernetes distribution will flag that the particular container isn’t sending valid output data. Finally, it’s a simple matter of the team reverting the offending container’s settings to before the malicious date change. Once they’ve done this initial diagnostic and workaround via a setting rollback, a team can then move to fix the underlying flaws that gave rise to the vulnerability. Throughout this entire process, the broader calendar application — and everything that relies on it — has stayed online. Microservices for efficiency and proactivity There’s a big takeaway from the above story: In a microservices architecture, only the flawed component needs to be replaced or updated, not the entire application. This means less downtime when an issue or vulnerability does arise, since teams can identify and revert an individual microservice that’s compromised. 
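The failure isolation in the thought experiment can be sketched in a few lines. Everything here is illustrative: the calendar service, the bugged date check, and the probe function are stand-ins for the story above, not a real Kubernetes API.

```python
from datetime import date

EPOCH_BUG = date(1970, 1, 1)  # the "malicious date" from the thought experiment

def calendar_service(current: date) -> dict:
    # Stand-in for the flawed calendar microservice: it fails on the bugged date.
    if current == EPOCH_BUG:
        raise ValueError("date underflow")
    return {"today": current.isoformat()}

def probe(current: date) -> str:
    # Stand-in for the orchestrator's health check: a failing microservice is
    # flagged and isolated rather than taking the whole application down.
    try:
        calendar_service(current)
        return "healthy"
    except ValueError:
        return "unhealthy"

print(probe(date(2022, 6, 1)))  # healthy
print(probe(EPOCH_BUG))         # unhealthy
```

In a real cluster, the "unhealthy" result is what a liveness or readiness probe reports, so traffic stops flowing to the broken container while the rest of the application stays online.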
Moreover, this creates less work for DevOps and security teams in addressing a flaw because they only need to rework an individual microservice, which necessarily has less application code than a full monolithic app. Additionally, microservices allow teams to be more proactive. Microservices enable this proactivity through the ring-fencing that prevents breaches or cascading vulnerabilities. This ring-fencing frees up teams to continually improve an individual microservice without having to think about the rest of the application. That means a DevSecOps professional can focus on watching out for vulnerabilities or rolling out security updates. There’s no need for administrative or logistical work to stop a security update from breaking another microservice in the application. When it comes to fixing zero-day vulnerabilities or securing your app against emerging threats, this flexibility and freedom are priceless. Because of microservices, teams can respond to security threats far faster and more effectively than ever before. And on the proactive side, microservices can enable teams to harden their systems at a dizzying rate. Altogether, that’s why microservices have changed the face of enterprise IT security: They let developers, operators and security teams work faster and with previously unparalleled flexibility. Simon Wright is UK director of strategic solutions for Red Hat. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,009
2,022
"How to avoid overspending on the cloud using finops | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-to-avoid-overspending-on-the-cloud-using-finops"
"How to avoid overspending on the cloud using finops Optimizing spend is the number one priority for organizations when it comes to the cloud, according to Flexera’s 2022 State of the Cloud report — and migrating more workloads to the cloud is a close second. How can companies balance these two competing objectives? The answer is finops, a cloud financial management practice that brings together IT, finance, engineering, product developers, IT asset management (ITAM), leadership and others to align on cloud usage and spending goals. Finops is a relatively new term, but the concept is gaining momentum. This is evidenced by the emergence of the Finops Foundation, an organization advancing finops best practices through standards and education. 
Its latest research, released in June 2022 at Finops X, the community’s largest conference, found that organizations in every major industry, including Global 2000 companies, have finops teams in place. Practicing finops allows companies to have the best of both worlds: Agile work streams that support rapid innovation without overpaying for cloud usage. However, to successfully deploy finops, you must create a culture of accountability across your organization, starting with clear communication. Competing priorities make it difficult to manage cloud costs Cloud migration introduces new spending complexities, and traditional IT frameworks aren’t set up to manage them. For example, engineers and developers can purchase resources in the cloud without conducting an approval process. This setup enables flexibility and agility (both of which are vital in a fast-paced environment) but leads to ballooning cloud costs. IT leaders often try to establish cloud center of excellence guidelines in response. However, these best practices often clash with engineers’ personal key performance indicators (KPIs), which they must meet to earn bonuses and promotions. Perhaps your IT department identifies the need to reduce uptime. Someone in IT finance asks the engineers and developers to shut down the server for a particular workload and move it elsewhere. However, the engineers want to avoid falling behind on projects that impact their performance reviews, so cost-saving efforts fall by the wayside. Changing this dynamic requires organization-wide communication and goal setting, and it has to start at the top. IT finance teams struggle to make improvements when executives haven’t aligned on finops priorities, causing friction between departments. 
On the other hand, when the C-suite adopts a cloud strategy without securing buy-in across the organization, your organization may encounter resentment and resistance from teams. 5 strategies for deploying finops in your organization When implementing finops for the first time, don’t run before you walk. It’s a long-term process, so set yourself up for success by ensuring stakeholders communicate priorities and align on goals before moving ahead. Finops, at its core, is about creating a culture of accountability, and organizational culture shifts take time and patience. Begin by identifying opportunities, and then implement policies and KPIs that empower everyone in your organization to take ownership of cloud spending. 1. Start with a cloud diagnostic Begin by gathering members of the C-suite with leaders from key departments like IT, ITAM, finance, devops, engineering and others to discuss your current cloud strategy and how you want to evolve it. Securing buy-in from the executive team enables change to happen much faster. Solicit input from team leads, identify where you may have competing goals, and brainstorm ways to get all departments on the same page. Hiring an external expert to guide the discussion and remove potential roadblocks often speeds up this process. 2. Employ the iron triangle The iron triangle is a project management framework that balances cost, time and scope against quality. You can use it to identify when excessive cloud spending is necessary rather than wasteful. Let’s say you’re developing a new customer-facing application that will differentiate your product, and you need to release it ahead of the competition. Speed is the most critical factor in this case, so you pay 30% more. From a reporting standpoint, the higher expense looks like wasted cloud spend, but you can justify it because it substantially impacts the business. On the other hand, suppose you need to make necessary — but relatively minor — product updates. 
The iron triangle tells you to either extend the timeline or narrow the scope to avoid unnecessary spending. 3. Create incentives It’s always easier to spend money that’s not yours. Instead of allocating your entire cloud cost to IT, set up a chargeback model that distributes it among departments. Seeing cloud usage as the largest line item on their team’s operating budget motivates managers to rein in costs. One way to mitigate cloud spending at the department level is to set KPIs for optimized code and workloads that hold individual employees accountable for their share of cloud usage. Tying finops best practices to performance goals allows you to make progress faster. 4. Enable automation As your finops framework matures, lean on automation to streamline workflows. For example, you can preconfigure various instance types that align with business priorities. You can also automate how servers are tagged and, for larger workloads, input justifications for how the migration and increased spend align with your business goals. Setting up these workflows makes it possible for your finops team to monitor spending without hindering developers’ ability to move quickly. 5. Keep optimizing Creating a finops culture of accountability is an ongoing journey. As technology evolves and your cloud usage grows, you may need to reevaluate priorities and adjust processes and KPIs accordingly. Successful finops requires continuous improvement to ensure alignment and keep cloud spending in check without sacrificing agility. Remain agile while keeping cloud spending in check The cloud is here to stay. However, excessive cloud spending doesn’t have to be. Optimize cloud usage by implementing finops strategies to create a culture of accountability in your organization. When everyone — from leadership down to entry-level employees — works toward the same goals, you can achieve agility and innovation in the cloud without overspending. 
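The chargeback model described in strategy 3 can be reduced to a small sketch: split one cloud bill across departments in proportion to their tagged usage. The department names and the usage metric are assumptions for illustration; real allocations are usually keyed off resource tags in the provider's billing export.

```python
def chargeback(total_cost: float, usage_hours: dict) -> dict:
    # Split one cloud bill across departments in proportion to tagged usage.
    # Department names and the hours metric are illustrative assumptions.
    total_hours = sum(usage_hours.values())
    return {dept: round(total_cost * hours / total_hours, 2)
            for dept, hours in usage_hours.items()}

bill = chargeback(10_000.0, {"engineering": 600, "data": 300, "marketing": 100})
print(bill)  # {'engineering': 6000.0, 'data': 3000.0, 'marketing': 1000.0}
```

Even a toy allocation like this makes the incentive concrete: once a department sees its proportional share as a budget line item, cost optimization stops being solely IT's problem.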
Dan Ortman is the director of finops services at SoftwareONE. "
14,010
2,022
"Software needs to make way for containerization and Kubernetes | VentureBeat"
"https://venturebeat.com/datadecisionmakers/software-needs-to-make-way-for-containerization-and-kubernetes"
"Software needs to make way for containerization and Kubernetes These days, it’s very common to find yourself using an application or piece of software on your phone for most daily tasks, such as adjusting the temperature in your home or even turning on the lights. But this convenience can, in fact, be a hindrance — and in the context of providing service to an end customer, it takes on a slightly different significance. Service provision is complex and fluid Inflexible software can pose a threat even to the best-laid plans of service providers. Service firms are often complex in nature, and through acquisitions or organizations working alongside original equipment manufacturers (OEMs), distributors, aftermarket part manufacturers or contingent employees, they are frequently threatened by a mismatch of cultures. 
On top of this, service delivery itself doesn’t fit into a neat box; instead, it spans many industries that provide home or mobile services to end customers. The goal for service software is to enhance the service process, helping complete the action — not to disrupt it in any way. But this level of complexity means that many service providers struggle to get their teams coordinated to use the technologies at their disposal. For example, if an HVAC installation provider can only build technician job schedules based on availability loadouts three weeks in advance and can’t update them the day of, they cannot efficiently utilize their time. Changes due to illness, a sudden high-priority outage, or any other day-to-day issues can arise. If software cannot be adaptable, then it is worse than pen and paper. It’s an impediment. Get to grips with containerization to cut through complexity This is why service providers need to look at containerized applications. Gartner has predicted that by 2023, 70% of global organizations will be running more than two containerized applications — up from just 20% in 2019. The concept of containerization, in its simplest terms, is that software is packaged, with all ancillary processes, enabling it to be deployed at the discretion of the end user. With containerization, service organizations can begin to introduce huge levels of flexibility further down the value chain, whether this is reverse or last mile logistics, virtual reality (VR) or augmented reality (AR). The options are vast. Way up in the cloud or down on the ground: Deployment flexibility at the core of containerization Cloud-based solutions and containerization are intrinsically linked. A cloud-first software product allows service organizations to wholly pass on the IT burden of managing upkeep, upgrades, licenses, and operations. 
But a containerized product, one that lives natively in the cloud, can be just as easily packaged and deployed on a home server with the same internal structure, same APIs, and to the same effect. If your infrastructure requires that, cloud solutions can meet those needs, not dictate the terms of how you interact with the product. Depending on the user, even the deployment of service software requires flexibility. Some service companies simply require, perhaps for regulatory reasons, their solutions to be managed on-premises. Others have a managed cloud space of their own that they want to employ. Others are in a position to move to the cloud. None of these (or any other adoption permutation) is wrong, and software that supports that flexibility will be key. Containerization opens the door to increased agility and new tech — enter Kubernetes Once a service organization has a containerized software architecture deployed in a manner that works for its business, it can begin to introduce huge levels of flexibility further down the value chain. This could be introducing a new business model, such as reverse logistics, or new technician technologies, such as AR and VR for expert-to-expert or expert-to-customer collaboration. Enter Kubernetes. Kubernetes is the open-source technology that helps facilitate containerization. It’s a ‘must have’ for cloud computing, as it makes it easier to configure systems, increases reliability, allows for quicker software deployment, and improves the efficient use of compute resources. According to a study from VMware, 95% of participants realized benefits from Kubernetes, including 56% who said that they saw improved resource utilization. Kubernetes-enabled software can quicken the pace for service companies to bring new features and capabilities to market and into the hands of customers. 
In turn, businesses themselves can quickly adapt to changes in the market and regulatory environment, and even turn that agility into a competitive advantage, which, from a service perspective, only benefits the end user tenfold. Peak demand or lulls in business, your services will always be there Containerization benefits are clear — it’s a multi-functional, multi-beneficial software approach that will only enhance service delivery. Kubernetes and containers are built to be highly scalable and can even be set up to scale services up and down in real time. When traffic to those servers increases or decreases, it offers the peace of mind that your services will always be readily available for employees and customers — not limited by surges in demand driven by market forces. Raymond Jones is SVP of cloud operations at IFS. "
14,011
2,021
"Codex aims to enable engineers to collaborate within an IDE | VentureBeat"
"https://venturebeat.com/business/codex-aims-to-enable-engineers-to-collaborate-within-an-ide"
"Codex aims to enable engineers to collaborate within an IDE Codex, a company that provides a developer tool designed to let engineers communicate directly within an integrated development environment (IDE), today announced that it has secured $4.4 million in funding and is now in private beta. The seed round will help the company grow its team and onboard even more beta users from its waitlist of more than 200 companies. Codex was a member of the Y Combinator Summer 2021 startup funding cycle. A month after receiving its Y Combinator funding, Codex began a private beta with 25 companies. Today, Codex’s beta release is a VS Code extension that enables context-sharing and collaboration as a local-first solution. Codex makes programming multiplayer Generally, when a team member has a question about a code block, they have to track down its author in Slack or via a pull request. 
With Codex, users highlight a code block in their IDE and request context by asking a question. Codex performs the Git function “git blame” and then automatically prompts — via a notification in Codex — the members of the team who worked on the specific lines of code in question. Codex then holds that context in the correct location of the codebase. Codex is also designed to allow engineers to introduce context by annotating areas of a codebase. “We’re out to save engineers time and headaches by automatically storing and sharing institutional knowledge,” cofounder and CEO Brandon Waselnuk said. “I’ve heard horror stories from so many engineers about answering the same question over and over again in Slack DMs, or multiple pair programming sessions for the same topic filling their calendars.” “Many companies have senior staff leaving with all this critical context that’s never been written down or shared. This leads to teams having to, in the worst case, reverse engineer functionality to grok how it works. It’s crazy how much time is spent on this work today,” Waselnuk said. Staff retention is an issue affecting many industries, increasingly in tech. As seen in the recently released Work Trend Index survey from Edelman Data x Intelligence, nearly 41% of people are considering leaving their current employer this year, and there’s a 4.5% increase in tech resignations. A quest for context Codex founders, Waselnuk along with Karl Clement, COO, and Saumil Patel, chief technology officer, say they started the company as a side project in their quest to add a context layer on top of a Git repo to help onboard new engineers into a codebase. 
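The "git blame" lookup described above can be approximated with a short script. This is a sketch of the general technique, not Codex's implementation: it shells out to git's porcelain-format blame for a line range and collects the author names it reports.

```python
import subprocess

def parse_authors(porcelain: str) -> set:
    # Porcelain blame output includes one "author <name>" header per commit.
    return {line[len("author "):] for line in porcelain.splitlines()
            if line.startswith("author ")}

def blame_authors(path: str, start: int, end: int) -> set:
    # Ask git who last touched lines start..end of a file,
    # the same lookup a tool could automate behind an IDE notification.
    out = subprocess.run(
        ["git", "blame", "--porcelain", "-L", f"{start},{end}", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_authors(out)

# Parsing demonstrated on a hand-written porcelain fragment:
sample = "author Jane Doe\nauthor-mail <jane@example.com>\nauthor Bob Lee\n"
print(parse_authors(sample))  # a set containing 'Jane Doe' and 'Bob Lee'
```

From the resulting author set, a tool would map names to workspace accounts and route the question to the right people, which is the part Codex layers on top.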
They wanted to provide engineers with a tool that could essentially answer why the developer architected software in a certain way, such as what decisions were made to use certain design patterns, or why they chose to use a for loop instead of a dictionary. Codex plans to offer integrations to other modern IDEs, allowing everyone at a company to share context, as well as a desktop application that will let engineers author and share onboarding paths through the codebase. Codex never stores source code and all processing happens locally on the user’s machine, the company claims. The funding round is led by NFX, backed by Y Combinator, and joined by Ludlow Ventures, Emergence Capital, and operator angels. "
14,012
2,021
"Gitpod nabs $13M for cloud-based open source software development platform | VentureBeat"
"https://venturebeat.com/business/gitpod-nabs-13m-for-cloud-based-open-source-software-development-platform"
"Gitpod nabs $13M for cloud-based open source software development platform Gitpod, a cloud-based open source development environment, today revealed it has raised $13 million in a round of funding. The German startup also introduced a handful of new features, including native support for Microsoft’s Visual Studio Code editor. The raise comes amid a boom in activity in the browser-based coding sphere, as developers move away from local development environments to the collaboration-friendly cloud — particularly important in a world that has rapidly transitioned to remote work. Moreover, local development can cause problems when it comes to testing performance and security because not everyone has the same technological setup as the developer. Moving to the cloud helps circumvent many of these problems. 
“Developers are automating the world, yet they waste a lot of precious energy manually setting up and maintaining development environments,” Gitpod CEO Sven Efftinge told VentureBeat. “Millions of developers are slowed down on a daily basis with tedious tasks to get into a productive state while also facing annoying ‘works-on-my-machine’ problems. Our purpose is to remove all friction from the developer experience. This makes everyone always ready to code and software engineering more collaborative, joyful, and secure.” Git together Gitpod broadly adheres to a similar ethos as continuous integration (CI), a popular software engineering practice that involves automatically merging code changes from multiple developers working on the same project. CI is all about ensuring that developers are committing smaller changes more frequently and shipping new code and fixes more quickly. From Gitpod’s perspective, the ethos basically means that it “listens” to changes within a git repository and prebuilds the source code whenever someone pushes a change to it — these prebuilds, according to Efftinge, are “key to preparing dev environments that are truly ready to code.” “We invented prebuilds so application code, configuration, and infrastructure can all be stored as machine-executable code in your git repositories and applied to dev environments automatically and continuously,” he said. “We are preparing your whole dev environment even before you start. Only then, you are always ready to code with a single click.” This also highlights a licensing limitation of Gitpod’s open-core open source model, as its free self-hosted offering only includes limited prebuild times. Gitpod works with all the main git platforms, including GitHub, GitLab, and Bitbucket, allowing developers to spin up a server-side (i.e. 
not local) development environment from any repository in just a few seconds. This includes the IDE (integrated development environment) and all the related tools and dependencies needed to run the project, including compilers, interpreters, runtimes, build tools, databases, and application servers. In short, Gitpod enables developers to start coding immediately, bypassing the local setup and maintenance process entirely. Above: Running Gitpod directly from a GitLab repository Cloud-based coding environments aren’t exactly new, with the likes of Codenvy — which was acquired by Red Hat four years ago — built on the Eclipse Che open source cloud IDE. More recently, we’ve seen a slew of cloud-based developer tools, including GitHub’s Codespaces, which launched in early access last year and is similar to Gitpod in many ways. Then there’s CodeSandbox, which raised $12.7 million in October to help developers create a web app development sandbox in the browser; Replit, a browser-based IDE built for cross-platform collaborative coding that raised $20 million in February; and CoScreen, which exited stealth last month with $4.6 million in funding to bring multi-user screen sharing and editing to remote engineering teams. Not all of these are exactly the same proposition as Gitpod, but they demonstrate that development environments are shifting away from “local.” Gitpod’s decision last August to release its platform under an open source AGPL license was a big move for the company, one that afforded developers more freedom to deploy Gitpod however they want, whether through a SaaS subscription managed and hosted by Gitpod or self-hosted on Kubernetes, Amazon’s AWS, or Google Cloud Platform. And a native integration with GitLab announced late last year will only serve to deepen its appeal. But Gitpod’s big pitch is that it’s not purely focused on the IDE — it’s about automating developer environments in the cloud. 
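The prebuild mechanism described above (build on every push, so the environment is ready before anyone opens a workspace) can be sketched in a few lines. The function names, webhook field, and cache shape here are illustrative assumptions, not Gitpod's actual internals:

```python
# Minimal sketch of the prebuild idea: on every push event, build the
# workspace ahead of time and cache it by commit SHA, so a later
# "open workspace" request is a cache hit instead of a cold build.
# All names are hypothetical; this is not Gitpod's real API.

def handle_push(event, cache, build_fn):
    """Prebuild the workspace for the pushed commit and cache the result."""
    sha = event["after"]  # commit SHA from a git push webhook payload
    if sha not in cache:
        cache[sha] = build_fn(sha)  # expensive: clone, install deps, compile
    return cache[sha]

def open_workspace(sha, cache, build_fn):
    """Open a workspace: instant if a prebuild exists, cold build otherwise."""
    prebuilt = sha in cache
    image = cache.get(sha) or build_fn(sha)
    return image, prebuilt
```

In practice the cache would be keyed on more than the commit SHA (base image, configuration files), but the core trade is the same: pay the build cost at push time rather than at workspace start.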
The company is currently piloting a feature that allows users to benefit from Gitpod while working with third-party IDEs such as GoLand or IntelliJ and connect to Gitpod containers from their local environment. “We built Gitpod in a way that its architecture is scalable and it can work with other IDEs as well,” Efftinge said. “The feature is currently still in beta, but [it’s] important to understand our future direction.” Open for business It’s worth noting that Gitpod’s target market is developers, so embracing open source makes a great deal of sense. Developers, after all, play a big part in driving companies’ software buying decisions. “Making it open source builds trust and allows users to become contributors as well, or at least take part in the development process,” Efftinge said. “From a business perspective, buying power in companies shifts toward the individual engineer — for Gitpod to be successful, we have to win the hearts and minds of developers.” Founded out of Germany in 2019, Gitpod had previously raised $3 million in funding. Its latest $13 million cash injection was spearheaded by General Catalyst, with participation from Speedinvest, Crane Venture Partners, and Vertex Ventures. Gitpod claims some 350,000 users, including developers from major businesses such as Google, Amazon, Facebook, Uber, Intel, and GitLab, though Gitpod didn’t confirm whether the companies are paying customers or not. “What we can say is that all of those companies have projects where they use Gitpod to streamline their development workflows for either their own developers or for external contributors,” Efftinge said. Alongside its funding, Gitpod also announced today that it now supports Docker and sudo privileges (a Linux program to give temporary root privileges to specific users), which means developers can now run Docker in their workspaces. And Microsoft’s Visual Studio Code will now also work in Gitpod natively. 
“You get exactly the same editing experience that you would get if you have VS Code installed locally,” Efftinge said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,013
2,016
"Microsoft releases Visual Studio Code 1.0 as the code editor passes 500,000 monthly active users | VentureBeat"
"https://venturebeat.com/business/microsoft-releases-visual-studio-code-1-0-as-the-code-editor-passes-500000-monthly-active-users"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft releases Visual Studio Code 1.0 as the code editor passes 500,000 monthly active users Share on Facebook Share on X Share on LinkedIn Visual Studio Code. Microsoft today is announcing the release of version 1.0 of its open-source Visual Studio Code editor. The cross-platform application now has more than 500,000 monthly active users, after launching last year. More than 1,000 extensions have become available for Visual Studio Code since Microsoft introduced extensibility in November, when it became available on GitHub under an open-source MIT license. The community has closed more than 300 pull requests with an eye toward improving the software. “We feel like we have a good, stable ecosystem that lets us kind of declare GA [general availability],” Microsoft Visual Studio engineering leader Shanku Niyogi told VentureBeat in an interview. In reality, the new features have been available to people participating in the Insiders Program for weeks, but now the beta sticker is going away. Of course, Visual Studio Code isn’t the only code editor. 
There’s GitHub’s Atom, which has more than 1 million monthly active users, but it’s also older than Visual Studio Code. But this is Microsoft we’re talking about, and Microsoft has provided the Visual Studio integrated development environment (IDE) to developers in a business setting for nearly 20 years. So the company is keen to make sure it’s ready for serious business use. Microsoft has added features that make it easier for entire development teams to adopt — although that’s not to say it wasn’t sufficiently powerful when it came out in the first place, with features like native Git integration and a debugger. Version 1.0 includes localized versions of the editor in nine languages other than English: simplified Chinese, traditional Chinese, French, German, Italian, Japanese, Korean, Russian, and Spanish, according to a blog post. Today’s release also comes with numerous accessibility improvements, based on input from visually impaired people. These features are available through screen readers on Windows now, and the plan is to bring them to OS X and Linux later. There’s also a little widget now in the bottom left corner that provides status updates and options for extensions. And you can select text in many columns and edit them all at the same time, too. The ability to have multiple tabs of files running at once in the editor will come in a future update, Niyogi said. You can download Visual Studio Code 1.0 here. 
"
14,014
2,022
"Why established and regulated industries are shifting to cloud services | VentureBeat"
"https://venturebeat.com/data-infrastructure/why-established-and-regulated-industries-are-shifting-to-cloud-services"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why established and regulated industries are shifting to cloud services Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The last two years saw cloud technology heavily encouraged across almost every sector. For businesses wishing to thrive in the chaos of the pandemic, the move to cloud environments became a necessity amidst the shift to remote work and the frequent inability to access data centers. As a result, more businesses than ever — including many in established industries such as manufacturing, retail and healthcare — have accelerated their adoption of cloud-first models and strategies. This approach is empowering these industries with more agility and efficiency in what has been a very uncertain time for the world and thus, for business. But how exactly have businesses in these established sectors managed this impressive shift, and what impact has being cloud-first had on their operations and customers? 
Cloud services are helping regulated industries thrive Healthcare is a great example of an industry that has the ability to transform societies for the better but is often hamstrung in its efforts, partially due to the sensitivity of the data it handles. This is where the cloud comes in — it can help healthcare leaders balance progress and change with efficiency and security. In India, for instance, the government launched a new project, eHealth Infra, to help the nation’s underprivileged gain access to a public health insurance scheme. In the past, regulatory and data privacy concerns would have stymied efforts like this. However, private cloud services have caused many of those concerns to fade away. As a result, the project has since been joined by 26 states and covers 45-47% of the Indian population, providing the consistent connectivity that enables citizens to reliably enroll in the service. Finance is another industry that deals with strict regulation concerns due to the sensitive data it handles, in addition to legacy technology concerns. However, since digital-native fintechs started using their cloud-enabled nimbleness to provide customers with next-generation services, established financial institutions have had to reconsider their strategies in favor of more cloud-first approaches. Moving to cloud services has allowed these traditional institutions to increase efficiency and security, offering their customers the same high-quality experience that born-in-the-cloud, new-age fintechs offer. Physical businesses are going cloud-first for more efficiency Even in the incredibly physical world of manufacturing, businesses are finding new ways to adopt cloud-first models. Manufacturers need to keep track of numerous moving parts, from assembly on factory floors to the timely delivery of raw materials. 
And while some operations are external, in the case of manufacturing execution, workloads need to be closer to the factory floor. So, in these instances, businesses need a distributed channel of technology platforms to leverage edge computing and bring cloud technology closer to on-premise deployments. By using cloud tools to optimize operations and bring all these disparate actions into a single view, a manufacturer can dramatically improve its operations’ efficiency and reduce production timelines, which helps it to sustainably expand. Cloud service: The cloud-first model in action While the advantages of a cloud-first strategy are many, as noted above, it’s also instructive to see how a specific business works with the cloud to gain these benefits. A great example is the case of a large retailer that specializes in designer fashion, accessories and furniture. It was looking to accommodate the growing volume of data and applications accruing as it rapidly scaled up, both physically and digitally. It also wanted to flexibly support traffic spikes during periods of discount sales, festivals and holiday seasons — all while maintaining its systems’ security and controlling its costs of ownership and maintenance. By shifting the bulk of its processes, including its critical workloads, to a private cloud, the retailer was able to address all of these requirements, enabling higher performance, security and instant scale-up. Additionally, with a partner helping it tailor a cloud solution to its specific needs, the retailer created secure IP and VPN connectivity among its different sites, allowing for speedy data archiving, billing and real-time backup services to protect against data failover incidents. With the implementation of cloud services, the retailer now has a high-performing, secure, private environment with an availability guarantee of 99.9%. 
It can instantly spin up additional resources as and when it needs to, so it can cope better with the rapid opening of new outlets and the exponential growth in data that goes with that. And with the 24-hour, 365-days-a-year maintenance and support it receives from its cloud provider — a perk of being a large customer — it can easily acquire additional on-demand resources to help deal with seasonal traffic spikes. Embracing the cloud-first paradigm Consumers are one of the most powerful factors driving organizations to change. We now live in a world that, for the sake of accessibility, must be internet-first. And that affects every business and institution the public interacts with in any fashion. For instance, the UK’s National Health Service (NHS) has an internet-first policy that states, “all new health and social care digital services should be internet facing and existing services should be changed to be made available over the internet as soon as possible.” But businesses looking to make this transition must understand that there isn’t a one-size-fits-all strategy that established sectors can take. That’s why it’s crucial that every business begin by looking inward to see how a cloud-first approach could benefit it, as well as outward for a partner to help it successfully make the transition as non-disruptively as possible. We all know cloud technology will be part of every organization’s future. The true winners in the coming years will be those that figure out not only how to optimize their present operations with a cloud-first strategy, but how cloud services they adopt can help them lay the groundwork for exponential growth in the future. Rajesh Awasthi is Vice President & Global Head of Managed Hosting and Cloud Services at Tata Communications DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
14,015
2,022
"How scanning GitHub can help secure the open-source software supply chain | VentureBeat"
"https://venturebeat.com/security/open-source-security-github"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How scanning GitHub can help secure the open-source software supply chain Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Supply chain security attacks have changed cybersecurity forever. Ever since President Biden released his Executive Order on Improving the Nation’s Cybersecurity following the Log4j and SolarWinds breach debacles, open-source security has been a top priority for organizations. In fact, research shows that 73% of organizations have adopted measures to secure their software supply chains. Continuing this trend, SaaS security provider Legit Security today announced the launch of Legitify, a new open-source security tool designed to help enterprises secure their GitHub implementations. The solution will enable security and devops teams to scan GitHub configurations at scale and ensure the integrity of open-source software. 
GitHub supports over 1.5 million organizations and plays an integral role in many organizations’ software supply chains as a source-code management (SCM) solution for storing code updates and identifying issues. Securing GitHub against the open-source onslaught It’s no secret that vulnerabilities in open-source projects can be devastating. For instance, the remotely exploitable Log4j flaw was used as part of over 840,000 attacks within 72 hours of discovery. Legit Security believes that securing GitHub is key to securing the open-source software supply chain, as exploits provide a means to modify source code, harvest secrets and initiate a supply chain attack. For instance, recently the organization disclosed attack vulnerabilities in open-source projects from Google and Apache, including a “GitHub environment injection” within the Google Firebase project that enables an attacker to take control of a project’s GitHub Actions CI/CD pipeline and modify the underlying source code. GitHub occupies a unique place in the open-source ecosystem: although it’s widely used, implementations are often difficult to secure because discovering misconfigurations for each repository is time-consuming. “It’s difficult and time-consuming to consistently enforce security across large GitHub implementations, and GitHub misconfigurations are a very common source of vulnerabilities. Different individuals often deploy GitHub instances with different configurations and settings,” said Legit Security cofounder and CTO Liav Caspi. “However, manually enforcing consistency across large GitHub organizations is very labor-intensive and prone to human error. Legitify addresses this by allowing security teams and devops engineers to manage and enforce their GitHub configurations in a secure and scalable way,” Caspi said. 
Legitify answers these challenges by enabling users to scan GitHub implementations by a specific instance, resource type or entire organization via the command line so they can detect security issues, categorize their severity and review remediation steps. Other GitHub scanning solutions It’s important to note that Legit Security’s solution isn’t the only tool capable of scanning the security of GitHub code. GitHub Code Scanning, released in 2020, is a native solution that integrates with GitHub Actions to scan code as it’s developed and provides users with security reviews to identify vulnerabilities. Another tool offering this capability is SonarQube GitHub Action, which allows the user to employ a SonarQube scanner to detect bugs and vulnerabilities in code in over 20 programming languages. SonarQube’s parent company, SonarSource, raised $412 million in funding earlier this year to scan codebases for vulnerabilities. “Legitify is a unique open-source security tool designed for large enterprise deployments of GitHub. Legitify connects to GitHub via an access token and detects issues across four resource types: member, repository, actions and organization,” Caspi said. 
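To illustrate the kind of repository misconfiguration such a scan surfaces (the rules, field names, and severities below are hypothetical, not Legitify's actual policy set or output format), a minimal configuration check might look like:

```python
# Hypothetical sketch of a GitHub repository configuration check, in the
# spirit of the scanning described above. The settings dict and rules are
# illustrative assumptions, not Legitify's real schema.

def check_repo(settings):
    """Return a list of (severity, finding) tuples for one repository."""
    findings = []
    if not settings.get("branch_protection_enabled"):
        findings.append(("high", "default branch has no protection rule"))
    if settings.get("required_reviewers", 0) < 1:
        findings.append(("medium", "merges allowed without code review"))
    if settings.get("allow_force_pushes"):
        findings.append(("medium", "force pushes permitted on default branch"))
    return findings
```

A real scanner would pull these settings from the GitHub API with an access token and evaluate many rules across organization, repository, member, and Actions resources; the point is that each rule runs identically against every repository, which is what makes enforcement consistent at scale.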
14,016
2,022
"Study provides insights on GitHub Copilot’s impact on developer productivity | VentureBeat"
"https://venturebeat.com/ai/study-provides-insights-on-github-copilots-impact-on-developer-productivity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Study provides insights on GitHub Copilot’s impact on developer productivity Share on Facebook Share on X Share on LinkedIn GitHub Copilot Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Recently, writing software code has become a promising use case for large language models like GPT-3. At the same time, like many developments in artificial intelligence (AI), there are concerns about how much of the excitement surrounding large language model (LLM)-powered coding is hype. A new study by GitHub shows that Copilot, its AI code programming assistant, results in a significant increase in developer productivity and happiness. Copilot uses Codex , a specialized version of GPT-3 trained on gigabytes of software code, to autocomplete instructions, generate entire functions, and automate other parts of writing source code. The study comes one year after GitHub launched the technical preview of its Copilot tool and just a few months after it became publicly available. 
GitHub’s study surveyed more than 2,000 programmers — mostly professional developers and students — who have used Copilot throughout the past year. While AI-assisted coding is still a new field and needs more research, GitHub’s study provides a good look at what to expect from tools such as Copilot. Happiness and productivity According to GitHub’s findings, 60–75% of developers feel “more fulfilled with their job, feel less frustrated when coding, and can focus on more satisfying work” when using its Copilot tool. Feeling fulfilled and satisfied is a subjective experience, though there are some common traits across what developers reported. “Knowledge workers in general – and that includes software developers – are intrigued and motivated by problem-solving and creativity,” GitHub researcher Eirini Kalliamvakou told VentureBeat. “For example, a developer tends to find it more satisfying to think about what design patterns to use, or how to architect a solution that implements a particular logic, drives an outcome, or solves a problem. Compared to that, the rote memorization of syntax or ordering of parameters is considered ‘toil’ that most developers would love to get through quickly.” Copilot also helps developers “preserve mental effort during repetitive tasks,” 87% of the respondents reported. These are tasks that are frustrating and prone to mistakes, such as writing a SQL migration to update the schema of a database. “With the exception of database administrators, developers may not write SQL migrations often enough to remember all of the particular SQL syntaxes,” Kalliamvakou said. “But it’s a task that happens often enough for the mental cost of the non-immediate recall to add up. 
GitHub Copilot removes much of the effort in this scenario.” Developers tend to “stay in the flow” when using Copilot, the survey found — meaning they spend less time browsing reference documents and online forums like StackOverflow to find solutions. Instead, they prompt Copilot with a text description and get code that is mostly correct and might need a bit of tweaking. Faster task completion More than 90% of the survey’s respondents reported that Copilot helps them complete tasks faster — a finding that was expected. To further measure the speed improvement, GitHub conducted a more thorough experiment, recruiting 95 developers and giving them the task of writing a basic HTTP 1.1 server from scratch in JavaScript. The participants were divided into two groups: a test group of 45 developers who used Copilot and a control group of 50 developers who did not use the AI assistant. While task completion was not overwhelmingly different between the two groups, completion time was. The Copilot group was able to complete the server code in less than half the time it took the control group. While this is an important finding, it would be more interesting to see which types of tasks Copilot helped more with and which areas required more manual coding. Although GitHub did not have figures to share in this regard, Kalliamvakou told VentureBeat that she and her group are “performing more analysis on the code the participants wrote, and plan to share more in the near future.” Code review and security It is worth noting that LLMs do not understand and generate code in the same way that humans do, which has raised concerns among researchers. One of these concerns, which is also mentioned in the original Codex paper, is the possibility of AI tools providing erroneous and possibly insecure code suggestions. 
There are also concerns that over time, developers could start accepting Copilot suggestions without reviewing the code it generates, which can cause vulnerabilities and open new attack vectors. While GitHub’s new study does not have any information on how Copilot affects secure coding practices, Kalliamvakou said that GitHub continues to work on improving the model and code suggestions. Meanwhile, she stressed that suggestions by GitHub Copilot should be “carefully tested, reviewed, and vetted, like any other code.” “As GitHub Copilot improves, we will work to exclude insecure or low-quality code from the training set. We think in the long-term, Copilot will be writing more secure code than the average programmer,” Kalliamvakou said. Kalliamvakou added that GitHub’s studies of Copilot have revealed new areas where AI can help developers, including support for Markdown, better interaction between Copilot and Intellisense suggestions, and using the tool in other parts of the software development lifecycle, including testing and code review. “Our largest investment is in improving the model, and the quality of suggestions provided by GitHub Copilot since that is the source of the noticeable benefits our users experience,” Kalliamvakou said. “Over time, we expect that GitHub Copilot will be able to remove more of the boilerplate and repetitive coding that developers see as taxing, creating more room for job satisfaction and fulfillment.” 
"
14,017
2,022
"Infrastructure as code and your security team: 5 critical investment areas | VentureBeat"
"https://venturebeat.com/security/infrastructure-as-code-and-your-security-team-5-critical-investment-areas"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Infrastructure as code and your security team: 5 critical investment areas Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The promises of Infrastructure as Code (IaC) are higher velocity and more consistent deployments — two key benefits that boost productivity across the software development lifecycle. Velocity is great, but only if security teams are positioned to keep up with the pace of modern development. Historically, outdated practices and processes have held security back, while innovation in software development has grown quickly, creating an imbalance that needs leveling. IaC is not just a boon for developers; IaC is a foundational technology that enables security teams to leapfrog forward in maturity. Yet, many security teams are still figuring out how to leverage this modern approach to developing cloud applications. 
As IaC adoption continues to rise, security teams must keep up with the fast and frequent changes to cloud architectures; otherwise, IaC can be a risky business. If your organization is adopting IaC, here are five critical areas to invest in. Building design patterns Constantly putting out fires from one project to the next leaves security teams struggling to find the time and resources to build foundational security design patterns for cloud and hybrid architectures. Security design patterns are a required foundation for security teams to keep pace with modern development. They help solution architects and developers move quickly and independently within clear guardrails that define the best practices security wants them to follow. Security teams also gain autonomy and can focus on strategic needs. IaC provides new opportunities to build and codify these patterns. Templatizing is a common approach that many organizations invest in: for common technology use cases, security teams establish standards by building IaC templates that meet the organization’s security requirements. By engaging early with project teams to identify security requirements up front, security teams incorporate security and compliance needs and give developers a better starting point for their IaC. However, templatization is not a silver bullet. It can add value for select commonly used cloud resources, but it requires an investment in security automation to scale. Security as code and automation As your organization matures in its use of IaC, your cloud architectures become more complex and grow in size. Your developers are able to rapidly adopt new cloud architectures and capabilities, and you’ll find that static IaC templates do not scale to address the dynamic needs of modern cloud-native applications. 
Every application has different needs, and each application development team will inevitably alter the IaC template to fit the unique needs of that application. Cloud service provider capabilities change daily, making your IaC security template a depreciating asset that becomes stale quickly. Scaling this approach demands a large governance investment from security teams and creates significant work for your SMEs, who must manage exceptions. Automation that relies on security as code offers a solution and enables your resource-constrained security teams to scale. In fact, it may be the only viable approach to cloud-native security. It allows you to codify your design patterns and apply security dynamically, tailored to each application’s use case. Managing your security design patterns as code has several benefits: Security teams do not need to become IaC experts. You get all the benefits of a version-controlled, modular, and extensible way to build these design patterns. Security design patterns can evolve independently, allowing security teams to work autonomously. Security teams can use automation to engage early in the development process. The ratio of developers to ops to security resources is often something like 100:10:1. I recently talked to an organization that has 10,000 developers and three AppSec engineers. The only viable way for a team like this to scale and prioritize its time efficiently is to rely on automation to force-multiply its security expertise. Visibility and governance Once you reach sufficient maturity in your IaC adoption, you’ll want all changes to be made through code. This allows you to lock down other channels of change (that is, the cloud console and CLIs) and build on good software development governance processes to ensure that every code change gets reviewed. 
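The security-as-code idea described above can be sketched as a small rule engine run against an IaC plan. The plan structure loosely mimics the JSON emitted by `terraform show -json`, but the field names and the two rules here are simplified illustrations for the sketch, not a real policy engine.

```python
# Minimal sketch of "security as code": codify design patterns as reusable
# rules and evaluate them against IaC before anything is deployed.
# The plan dict below is a simplified, illustrative stand-in for the JSON
# a tool like `terraform show -json` would emit.

def check_no_public_buckets(plan: dict) -> list[str]:
    """Return a finding for every storage bucket whose ACL is public."""
    return [
        f"{res['name']}: bucket must not be public"
        for res in plan.get("resources", [])
        if res["type"] == "storage_bucket"
        and res["values"].get("acl") == "public-read"
    ]

def check_encryption_at_rest(plan: dict) -> list[str]:
    """Return a finding for every storage bucket without encryption at rest."""
    return [
        f"{res['name']}: encryption at rest required"
        for res in plan.get("resources", [])
        if res["type"] == "storage_bucket"
        and not res["values"].get("encrypted", False)
    ]

# The codified design pattern: one list of rules, version-controlled
# alongside the rest of the code.
RULES = [check_no_public_buckets, check_encryption_at_rest]

def evaluate(plan: dict) -> list[str]:
    """Run every rule; an empty result means the change may proceed."""
    return [finding for rule in RULES for finding in rule(plan)]

plan = {
    "resources": [
        {"type": "storage_bucket", "name": "logs",
         "values": {"acl": "private", "encrypted": True}},
        {"type": "storage_bucket", "name": "assets",
         "values": {"acl": "public-read"}},
    ]
}

findings = evaluate(plan)
```

Because the rules live in code rather than in a static template, they can be versioned, extended, and applied automatically to every plan a developer submits — which is exactly what lets a tiny security team keep up with thousands of developers.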
Security automation that is seamlessly integrated into your development pipeline can now assess every change to your cloud-native apps and provide visibility into any potential inherent risks, avoiding time-consuming manual reviews. This lets you build mature governance processes that ensure security issues are remediated and compliance requirements are met. Drift detection Along your journey to IaC maturity, changes will be made to your cloud environment through IaC, as well as traditional channels such as the CSP console or command-line tools. When developers make direct changes to deployed environments, you lose visibility, and this can lead to significant risk. Additionally, your IaC will no longer represent your source of truth, so assessing your IaC can give you an incomplete picture. Investing in drift detection capabilities that validate your deployed environments against your IaC can ensure that any drift is immediately detected and remediated by pushing a code change to your IaC. Developer and security champions Security teams should put emphasis on the developer workflow and experience and seek to continuously reduce friction to implement security. Having developer champions within security that understand the challenges developers face can help ensure that security automation is serving the needs of the developer. Similarly, security champions within development teams can help generate awareness around security and create a positive feedback loop to help improve the design patterns. The bottom line IaC can be a risky business, but it doesn’t have to be. Higher velocity and more consistent deployments are in sight, as long as you’re able to invest in the right places. By being strategic and intentional and investing in the necessary areas, the security team at your organization will be best positioned to keep up with the fast and frequent changes during IaC adoption. Are you ready to take advantage of what IaC has to offer? 
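The drift-detection investment described above boils down to diffing the state declared in IaC (the source of truth) against the state actually deployed. The resource names, attribute layout, and the out-of-band change below are invented for illustration; real tooling would read deployed state from the cloud provider's APIs.

```python
# Sketch of drift detection: compare the attributes declared in IaC with
# what is actually running, and surface every mismatch for remediation.

def detect_drift(declared: dict, deployed: dict) -> dict:
    """Map resource -> {attribute: (declared, deployed)} for every mismatch."""
    drift = {}
    for name, want in declared.items():
        have = deployed.get(name, {})
        changed = {k: (v, have.get(k)) for k, v in want.items() if have.get(k) != v}
        if changed:
            drift[name] = changed
    return drift

# Desired state, as declared in IaC (illustrative resource and attributes).
declared = {"web-sg": {"ingress_port": 443, "cidr": "10.0.0.0/16"}}
# Observed state: someone opened a different port via the console --
# a classic out-of-band change that IaC alone would never see.
deployed = {"web-sg": {"ingress_port": 22, "cidr": "10.0.0.0/16"}}

drift = detect_drift(declared, deployed)
```

Once a mismatch like this is detected, the remediation the article recommends is a code change to the IaC (or a redeploy from it), so that the IaC remains the single source of truth.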
There’s no better time than now. Aakash Shah is CTO and cofounder of oak9. "
14,018
2,022
"Kubernetes challenges — Isovalent brings secure connectivity, nabs funding | VentureBeat"
"https://venturebeat.com/security/kubernetes-day-2-challenges-isovalent-brings-secure-connectivity-nabs-funding"
"Kubernetes challenges — Isovalent brings secure connectivity, nabs funding There’s no question that Kubernetes has become the new enterprise standard when it comes to building and operating modern applications. According to the Cloud Native Computing Foundation’s (CNCF) annual survey, 96% of organizations are either using or evaluating the container orchestration system. As such, today’s enterprises and telcos are past the Day 1 phase of Kubernetes, said Dan Wendlandt, CEO of Isovalent. And, as they grow into the Day 2 phase, organizations are learning that Kubernetes does not, on its own, provide a networking layer with the security, observability, reliability and performance required of more mission-critical workloads, he pointed out. 
This has pushed demand for open-source technologies — including Cilium and eBPF. To help meet these ever-increasing needs, Isovalent today announced that it has closed a $40M series B funding round. The company created the Cilium project and provides Isovalent Cilium Enterprise, both enabled by eBPF, a new Linux kernel technology. “eBPF is the single most interesting thing to happen in Linux in the past 10 or even 20 years,” said Wendlandt. And, while Isovalent started as an “all-in” bet on the technology and Kubernetes, “we are still in the early days of seeing all the ways in which Cilium and eBPF will transform the modern infrastructure layer.” Kubernetes Day 2 challenges “Which Kubernetes distro do I run?” “How do I migrate my initial applications onto Kubernetes?” Those are common Day 1 questions. But now that businesses have “figured out” how to run Kubernetes itself, they are tackling Day 2 challenges such as the following: “How do I troubleshoot connectivity failures or poor performance between two services running in Kubernetes?” “How does my security team perform an incident investigation in my Kubernetes environment?” Not only does Kubernetes not have built-in capabilities to tackle these problems, but traditional network infrastructure devices — firewalls, network load-balancers, network monitoring devices — are also limited in closing these gaps, said Wendlandt. Such devices then become bottlenecks, given the explosion of API communication between modern applications. Similarly, their focus on traditional packet-layer identity means they can’t understand service identity and API-call details in modern workloads. Cilium addresses these challenges by providing a multicloud and on-premises connectivity fabric that is secure and observable. This runs directly in the Linux kernel alongside each application workload. “This technological leap enables Isovalent to provide rich context and insight for security and operator teams,” said Wendlandt. 
Making eBPF consumable eBPF, without a doubt, has fueled Cilium’s rapid rise, said Wendlandt. “eBPF essentially allows us to teach the Linux kernel new tricks,” he said. Without it, the networking stack within Linux is largely composed of code that hasn’t changed much in 20 years, he said, and that was designed in an era when Linux was either running on a standalone server or a network appliance connecting static services. The world looks “drastically different” when Linux is used as the foundation for Kubernetes infrastructure, Wendlandt said, with hundreds of containers running on each node, rapidly appearing and disappearing as workloads cycle through automated continuous integration/continuous delivery (CI/CD) pipelines. “eBPF allows us to teach Linux to identify and properly connect, load-balance, firewall, and monitor these containerized workloads in a way that would never be scalable or performant using the legacy Linux networking,” said Wendlandt. Still, he described it as a “very low-level technology.” Cilium’s open-source community ultimately makes eBPF consumable, he said. “Cilium provides a consistent way to connect, secure and observe workloads across any type of underlying multicloud infrastructure,” said Wendlandt. Meeting modern workload needs And Cilium continues to evolve. The technology initially focused on Kubernetes networking and security use cases such as connectivity, load-balancing and firewalling, said Wendlandt. But demand prompted expansion to network observability (Hubble), runtime security observability and enforcement (Tetragon) and Cilium Service Mesh. Organizations are also looking to use eBPF to measure and enforce software supply chain security and workload profiling. “It is really not an exaggeration to say that eBPF will change every aspect of how modern workloads run on any and all Linux platforms,” said Wendlandt. 
Wendlandt underscored that Kubernetes promises consistent application workload life-cycle management regardless of underlying infrastructure. The idea of multicloud environments where workloads can seamlessly migrate isn’t “some pie-in-the-sky notion,” he said. “Rather, it is a realization that we are and will continue to be in a world of heterogeneous infrastructure, often comprised of a mix of private cloud and one or more public cloud providers,” he said. He also pointed out that enterprises, vendors, analysts and venture capitalists alike are struggling to define the new, emerging layer in the enterprise infrastructure stack. “As applications shift toward being a collection of API-driven services, the security, reliability, observability and performance of all applications becomes fundamentally dependent on this new connectivity layer,” said Wendlandt. The next step in the Kubernetes journey Since its introduction in 2018, Cilium has been selected as the default in several managed Kubernetes offerings of major public cloud providers: Google Kubernetes Engine, Google Anthos and Amazon EKS Anywhere. Rapid adoption of Cilium across many verticals — finance/payments, ecommerce/retail, insurance, telecommunications, government, data analytics, entertainment — “highlights the fact that we are solving a critical piece of the puzzle for users as they take the next step on their Kubernetes journey,” said Wendlandt. Furthermore, Cilium is one of the fastest-growing cloud-native connectivity projects in the Kubernetes ecosystem, he said, and it is the only Container Network Interface (CNI) at the incubation level in the CNCF. Its full “Graduated” project status is targeted for early 2023. Isovalent also co-maintains the eBPF codebase upstream in the Linux kernel, maintains ebpf.io, hosts the eBPF Summit, and helped create the eBPF Foundation along with Meta, Netflix, Google and Microsoft. 
The newest funding round was led by Thomvest Ventures, joined by Google, Cisco, Microsoft and Grafana Labs. Additional investors include Andreessen Horowitz, Mango Capital, and Mirae Asset Capital. The round will help Isovalent double its team — reaching roughly 100 employees — to continue supporting open-source communities while addressing demand for Cilium Enterprise, said Wendlandt. "
14,019
2,022
"3 cost trends in cloud computing today  | VentureBeat"
"https://venturebeat.com/data-infrastructure/3-cost-trends-in-cloud-computing-today"
"3 cost trends in cloud computing today Application modernization efforts that support the aggressive rollout of digital strategies have paved the way for accelerated cloud adoption. This has resulted in enterprises spending a higher percentage of their IT budgets in the cloud. To scale the cost of doing business in the cloud effectively, enterprises need to understand the underlying reasons their costs are increasing. A study by Andreessen Horowitz found enterprises are typically spending 20% more on public-cloud infrastructure than expected. The unpredictability of enterprise cloud spend is driven by three key factors: more applications being delivered with multi-cloud, increased charges from cloud service providers (CSPs) for pay-as-you-consume services, and cloud waste. 
The last three years have seen a tremendous rise in the use of cloud computing as companies have more fully embraced this technology to address the challenges of the global pandemic, including distributed workforces and the ever-expanding digital footprints needed to deliver better employee and customer experiences. The State of Multi-Cloud Infrastructure Report and Application eXperience Infrastructure Study (AXIS) found that spending on cloud accounted for 31% of overall IT budgets in the US last year. Similarly, International Data Corporation (IDC) showed that spending on cloud infrastructure increased 13.5% year over year in the fourth quarter of 2021 to $21.1 billion, marking the second consecutive quarter of year-over-year growth. IDC further predicted that by the end of 2022, cloud spending would outpace non-cloud IT infrastructure spending for the first time. As enterprises have shifted from short-term requirements focused on connectivity to digital strategies for long-term growth, there is an increasing focus on application modernization and lift-and-shift strategies backed by the cloud. In fact, the AXIS study revealed that nearly half of enterprises anticipate more than 75% of their applications will be in the cloud within 12 months. Cloud costs rising due to global economic crisis In March, Google Cloud announced significant price increases across a number of core services under the guise of wanting to provide “more flexible pricing models and options.” However, all cloud service providers (CSPs) have increased prices to varying degrees. This can be attributed to the chip shortages that gained national headlines last year as well as the rising cost of goods due to supply chain issues and inflation that we’re all experiencing now. 
The war in Ukraine has only exacerbated the problem as Ukraine produces 70% of the world’s supply of neon gas used in semiconductor lithography. Hidden egress costs: a key challenge Another key challenge of managing cloud costs for enterprises is hidden egress costs. While most cloud providers don’t typically charge to transfer data into the cloud (“ingress”), they do charge for data egress in most situations. Data egress occurs whenever your applications write data out to your network or whenever you repatriate data back to your on-premises environment. In a recent conversation I had with a prominent industry analyst, he noted that he is receiving more and more calls from clients about cloud egress. Just how high are these costs? Let’s look at the National Aeronautics and Space Administration (NASA), which generates an incredible amount of data every year. An internal audit expects data collection to increase eight-fold by 2026 and expand to 247 petabytes. The audit concludes that fees from moving data from the cloud present “potential risks that scientific data may be less available” and warns that NASA may need to impose limits on the amount of data egress to control costs. Surprise data egress fees can prevent enterprises from using the best cloud provider for the job or, as in NASA’s case, force them to impose data limits in an effort to control costs and billing complexity. However, a single-cloud strategy can entail other risks, like vendor lock-in and missed innovation opportunities. Eliminate unnecessary cloud spend Another area that often creates challenges for enterprises comes from cloud shadows — the adoption of SaaS, IaaS and PaaS without IT’s knowledge. Similar to subscriptions in our personal life — streaming services, budgeting tools, gym memberships, etc. — only when you see the bill do you realize you are paying for services you no longer use. The same holds true in business. 
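The egress exposure discussed above is easy to estimate with back-of-the-envelope math. The per-GB rate below is purely illustrative — real CSP egress pricing is tiered and varies by region and destination — but even a rough model like this surfaces repatriation costs before the bill arrives.

```python
# Back-of-the-envelope egress cost estimate. The flat per-GB rate is a
# made-up placeholder; substitute your provider's published, tiered rates.

ILLUSTRATIVE_RATE_PER_GB = 0.09  # USD per GB egressed; hypothetical

def monthly_egress_cost(gb_out_per_month: float,
                        rate: float = ILLUSTRATIVE_RATE_PER_GB) -> float:
    """Estimated monthly egress charge for a given outbound volume."""
    return round(gb_out_per_month * rate, 2)

# Example: an application writing 5 TB per month out of the cloud.
cost = monthly_egress_cost(5 * 1024)
```

Running the same estimate against a planned repatriation volume (or NASA-scale petabytes) is a quick way to decide whether a migration or multi-cloud placement is worth its egress bill.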
Large enterprises have different teams using the cloud to build and test applications and put them into production. But who is watching to ensure these cloud environments get turned off when they are no longer in use after a test, or when an application becomes dormant because it is no longer needed or gets replaced? While some enterprises have standardized on one cloud service provider, it’s increasingly common that enterprises are embracing multi-cloud. In fact, a study by Flexera found that 92% of enterprises have done so to boost innovation and improve customer experience. However, different teams choose different CSPs based on personal preference or familiarity, and because they offer different features and are available in different cloud regions. This adds another layer of difficulty for enterprises as they work to track costs and budget accordingly. End-to-end visibility of cloud environment with ML critical to cost management While managing cloud costs may seem daunting, the solution is actually well within the reach of all organizations. The key is to develop a strategy and deploy tools that offer real-time, end-to-end visibility across your entire cloud environment with ML-delivered insights, recommendations and automation. Tools that provide end-to-end visibility enable enterprises to identify cloud egress costs, see which applications are underutilized or dormant and turn off cloud region instances that aren’t in use. Greater visibility is key to successfully migrating to the cloud and managing the enterprise cloud footprint. It allows enterprises to easily identify where it makes sense to spend more if performance improves and can provide greater ROI. For example: let’s say you open a new office in Australia. Turning on a cloud region close to the new office could deliver 50% better performance at a cost of $1,000 per month. 
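The kind of visibility described above can be illustrated with a toy dormancy check over usage telemetry. The record layout and thresholds are invented for the example; a real tool would pull these metrics from the CSP's monitoring APIs and feed them into ML-driven recommendations.

```python
# Toy dormancy check: flag resources whose recent usage suggests they were
# left running after a test or replaced by something else, so they can be
# reviewed and switched off. Thresholds and fields are illustrative only.

DORMANT_CPU_THRESHOLD = 1.0   # average CPU % over the observation window
DORMANT_REQ_THRESHOLD = 10    # requests served over the window

def find_dormant(resources: list[dict]) -> list[str]:
    """Names of resources that look dormant under both thresholds."""
    return [
        r["name"] for r in resources
        if r["avg_cpu_pct"] < DORMANT_CPU_THRESHOLD
        and r["requests_30d"] < DORMANT_REQ_THRESHOLD
    ]

usage = [
    {"name": "checkout-prod", "avg_cpu_pct": 42.0, "requests_30d": 9_000_000},
    {"name": "loadtest-2021", "avg_cpu_pct": 0.2, "requests_30d": 0},
]

dormant = find_dormant(usage)
```

Even this crude heuristic makes the point: without end-to-end visibility feeding a check like this, the forgotten test environment keeps billing indefinitely.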
This approach leads to better understanding of cloud usage and pattern matching, giving enterprises the ability to better manage cloud costs and predict future cloud spend. As more C-suites are asking how each department’s spend contributes to the success of the business, enterprises should seek cloud-agnostic vendors with tools that give them complete visibility into cloud cost and spend. Having a better understanding of the cloud environment only helps lines of business and IT leaders show how the cloud contributes to growth. Mehul Patel is Head of Marketing and Customer Insights and Intelligence at Prosimo. "
14,020
2,022
"How cross-functional, multidisciplinary teams can help you survive a recession  | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-cross-functional-multidisciplinary-teams-can-help-you-survive-recession"
"How cross-functional, multidisciplinary teams can help you survive a recession Every week seems to bring bad news about the state of the global economy and its impact on the tech sector. We appear well on our way to a downturn. Every startup will experience this recession differently. Some, particularly those in spaces such as SaaS, may get through it relatively unscathed. Others, like certain ecommerce and speedy delivery sectors, look set to have a tougher time. What every prudent founder will be looking at is how their startup can best weather the storm and set itself up for success when things inevitably get better. The most obvious approach is to cut costs and increase efficiency. A no-brainer — but something that’s a lot easier to talk about than to effectively put into practice. Inevitably, many startups first look at cutting headcount to meet this goal. 
However, more often than not this does a lot more damage than good. Vital skills and knowledge are lost, morale is hit and customer service suffers. Instead, the answer could be found in a change of team structures, processes and approaches. A change that maximizes efficiency and resilience and promotes innovation. I am talking about the magic of cross-functional, multidisciplinary teams. This is probably a record-scratch moment where you look confused and wonder why I’ve started writing like a management consultant. Hear me out. Although this may sound like a load of random jargon, it actually describes one of the best future business structures. Multidisciplinary teams: Taking down the silos Generally speaking, businesses are divided up into departments. Marketers sit with marketers, and developers huddle with other developers. Marketing is in charge of marketing, the development team is in charge of, well, development. It’s nice and simple and has a lot of obvious advantages. Unfortunately, it also has some glaring problems that are becoming more and more apparent as technology, changing working practices and rising customer expectations increase the complexity of what many businesses do. I’ll give a few examples below, but they can be summarized as the siloing of knowledge — usually around data — causing bottlenecks, single points of failure and reduced innovation. Multidisciplinary teams are, as the name suggests, departments made up of people with a wide variety of skills. Cross-functional means that responsibilities, knowledge and aims go right across the business. Let’s focus on marketing. The way businesses communicate has become incredibly complex — more channels, more tools, digital transformation, an unprecedented amount of data and higher expectations. Websites are expected to provide a host of personalized experiences. 
All of this requires a huge number of skills working in tandem: data science, security, IT, digital marketing, copywriting, customer service, development and much more. Collaboration, not conflict Juggling all of these different skills found in different departments with different goals leads to a lot of headaches and, in some cases, conflicts. Marketers make requests of developers to complete an action immediately but it falls to the back of the queue because the developers have their own priorities. Data scientists provide inputs that don’t include the commercial insights that marketers need for strategies. Everyone forgets to inform customer service about the new marketing campaign copy. And so on. It’s inefficient, error-prone and an ultimately unsustainable way for many startups to operate. You can see these problems every time you experience a slow-running, poorly functioning or outdated company website. It was also readily apparent at the start of the pandemic as many companies struggled to switch their offerings online. A number of them — even some global companies — found they relied on one or two individuals (who were now absent with COVID-19) to manage website updates. They couldn’t put critical customer information online or even begin to create a new online channel for sales. Marketing is just the most obvious example; siloed teams impact everything from critical business decision-making — that is, the best infrastructure and tools to adopt — to sales, product development and commercial strategy. A multidisciplinary, cross-functional team A truly multidisciplinary, cross-functional marketing team includes all the skills you need to execute any project. This doesn’t mean splitting up the whole department into fixed smaller teams; it means allowing them to work cross-functionally on one project. Everyone works together and shares the same goals. 
Skills run in a continuum — data scientists know a bit about marketing, marketers know a bit about development. Information, insights and knowledge generated in the marketing team flow out to every other multidisciplinary department and vice versa. But wait a minute — weren’t we talking about surviving and thriving in a recession? This sounds expensive and disruptive, right? Well, no, not really. Certainly, if you intended to upend your entire business tomorrow and reorganize everything and everyone into a big multidisciplinary melting pot you would probably do more harm than good. I’m not advocating that. What I believe will work for a lot of startups is an incremental approach that focuses as much on the philosophy as it does on the practicalities. After all, building multidisciplinary teams involves a lot of best-practice measures that have their own wider benefits. How you can get started Every startup will be different, but there are some broad rules of thumb to follow to get started: Get your data flowing: Many startups big and small have information held in silos. Auditing your data — where it is held, who has responsibility for collecting, managing and analyzing it, where it is shared and how it is used — is the first step. Ensuring you have the tech and procedures to make it accessible across the business comes next. Building up the skills across your whole team to generate insights is the cherry on top. Break down those walls: Even the smallest startups can suffer with different teams operating in a quasi-rivalry with one another. More often than not it’s a structural issue. Priorities and goals across departments are not shared — except maybe a brief mention at an all-hands meeting. Actively encouraging and creating forums where different departments consistently meet to collaborate on and share problems and successes can be the easiest way to get started on closer integration and cooperation. 
Education, education, education: Teaching your team new skills can be the single most powerful initiative. It increases productivity, builds resilience and can really help speed up the integration process. But it doesn’t just happen naturally. You need to proactively upskill your team in a structured and targeted way. Identifying the key skills needed, who is best equipped to acquire them and, crucially, creating an environment where they can be applied immediately means developing a comprehensive training program. Make technology an enabler: It’s crazy how many companies have a tech stack that’s largely inaccessible or inappropriate outside the department it was originally commissioned for. Worse is when IT, for example, makes all procurement decisions and imposes them. Your tech stack can and should underpin cross-functional collaboration. It is the key to free data and information flow. Surveying your team’s sentiment to identify problems and opportunities will show you where you can make quick improvements. Ultimately, you want to get to a point where you don’t need power users to get the most out of your stack. Talk the talk and walk the walk: Your team will take a cue from you and your leadership team. If you want to get buy-in for this approach you all need to get stuck in. This means being more transparent with management decisions and getting more involved and knowledgeable about how departments work in practice. This is much more than receiving activity updates. It means really understanding what everyone does on a day-to-day basis. If your CFO can tell you what Dave in development is working on — you know you’ve made it. Start small but think big: Any one of the above actions would see a big ROI for your startup. The key is to remember that at some point you will need to build fully multidisciplinary teams. Piloting one department, continually monitoring results and learning from mistakes will help you to eventually roll it out across your business. 
Don’t rush — costs will take care of themselves and risk will be mitigated by moving in a thoughtful and strategic manner. Remember, you need your team to be sold on the idea, or it will struggle to work. There’s no escaping the fact that change can be hard. I imagine many people reading this will shrug their shoulders and think they already do this. However, there’s a big difference between having the veneer of a collaborative startup and having the procedures, skills, mentality and infrastructure that make it a reality. The virtue of starting this journey now is that not only will it help you ride out the present recession, it will also future-proof your business for the technological, economic and customer challenges the next few years will bring. Dominik Angerer is CEO and cofounder of enterprise CMS Storyblok. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,021
2,014
"Meet Terraform, a simple command-line tool to manage all your cloud infrastructure | VentureBeat"
"https://venturebeat.com/business/terraform-hashicorp"
"Meet Terraform, a simple command-line tool to manage all your cloud infrastructure Mitchell Hashimoto, the guy who created the popular Vagrant tool for setting up development environments, has gone and built another useful tool for developers working on public-cloud platforms. No longer must developers sign in to one or more online portals and hit a bunch of buttons to load up or adjust the public-cloud infrastructure they use. With Terraform , that work can happen right from a developer’s command line. “With Terraform, you describe your complete infrastructure as code, even as it spans multiple service providers,” Hashimoto’s company, HashiCorp, wrote in a blog post today announcing the new tool. 
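To make that “infrastructure as code” idea concrete, here is a minimal, hypothetical Terraform configuration written in HashiCorp’s HCL syntax. The provider settings, AMI ID, and instance type below are placeholder values for illustration, not details taken from HashiCorp’s announcement:

```hcl
# Hypothetical sketch: a single AWS virtual machine described as code.
# All identifiers here are illustrative placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-12345678"   # placeholder machine image ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo"
  }
}
```

With a file like this checked into version control, `terraform plan` previews the changes and `terraform apply` creates the instance; the same description can be re-applied, shared with teammates, or torn down again with `terraform destroy`.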
Terraform already supports and can manage physical servers, virtual machines and application containers on a wide range of clouds, including Amazon Web Services, Heroku, Google Compute Engine, DigitalOcean, and Cloud Foundry. From the beginning, then, Terraform could be useful for developers at companies that want to create or manage IT architectures across multiple clouds. And that right there is what separates it from some tools that already exist for manipulating cloud infrastructure. The Amazon cloud’s CloudFormation tool, for instance, can only handle Amazon infrastructure. Meanwhile, Terraform can work in lockstep with existing configuration-management tools like Puppet and Chef. It’s not a competitor to those widely used services. Mostly written in the increasingly popular Go programming language, Terraform can come in handy for several purposes. For instance, launching the infrastructure for a demonstration of software becomes faster. From the Terraform site: Software writers can provide a Terraform configuration to create, provision and bootstrap a demo on cloud providers like AWS. This allows end users to easily demo the software on their own infrastructure, and even enables tweaking parameters like cluster size to more rigorously test tools at any scale. Keep an eye on Terraform. If Vagrant’s success is any indicator, it could end up becoming a common element of many developers’ tool sets. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,022
2,022
"Google launches vulnerability reward program to secure open-source software  | VentureBeat"
"https://venturebeat.com/security/google-vulnerability-reward-program"
"Google launches vulnerability reward program to secure open-source software Open-source software security is in need of a massive overhaul. So many organizations rely on open-source software to fulfill critical services and operations, but have next to no control over how these components are maintained. For this reason, more and more private organizations are stepping up to the plate to help identify and fix vulnerabilities before attackers can exploit them. Just today, Google announced the launch of the Open Source Software Vulnerability Rewards Program (OSS VRP), which offers rewards of up to $31,337 for researchers who can find bugs in the open-source ecosystem. 
The launch highlights that a crowdsourced approach to security has the potential to mitigate vulnerabilities in widely used (but traditionally underfunded and under-maintained) open-source projects, and eliminate potential entry points into enterprise environments. Restoring confidence in the software supply chain The release of the OSS VRP comes as anxiety over attacks on the software supply chain has reached an all-time high, following the discovery of zero-day vulnerabilities like Log4j and Log4Shell and monumental data breaches impacting providers including SolarWinds and Codecov. This anxiety was well-founded, as threat actors were also actively looking to target vulnerabilities in the software supply chain, with attacks targeting the open-source software supply chain increasing 650% between 2020 and 2021. Taken together, these factors have severely impacted confidence in the security of open-source software. Research shows that 41% of organizations don’t have high confidence in their open-source software security. However, providers like Google are aiming to restore confidence in the software supply chain by financially incentivizing researchers to identify and fix vulnerabilities. “Google develops and maintains more than ten thousand open source projects. Many of these projects are used extensively in critical infrastructure (e.g. Golang, Tensorflow). Finding and fixing vulnerabilities in these critical projects will help improve the security posture of the open source ecosystem and other users,” said open source security technical program manager Francis Perron. 
As part of the new initiative, researchers will receive a payout according to the severity of the vulnerability discovered, with the biggest rewards going to those who discover vulnerabilities found in sensitive projects such as Bazel , Angular , Golang , Protocol buffers and Fuchsia. It’s worth noting that this announcement comes hot on the heels of Google’s participation in the NIST/NSF/OMB’s U.S. Open-Source Software Security Initiative Workshop and will help it work toward fulfilling the organization’s $10 billion commitment to improving cybersecurity. The wider open-source security landscape Google isn’t the only organization looking to play a greater role in defining open-source security. Earlier this year, at the White House Open Source Security Summit II organized by the Linux Foundation and the Open Source Software Security Foundation (OpenSSF), 90 executives from 37 companies came together to discuss how to secure the open-source supply chain. At the event, providers including Amazon, Microsoft, Ericsson, Intel, VMware and Google pledged to contribute over $30 million collectively to enhance the security of open-source software. Currently, Microsoft is offering consulting services for the OSS SSC Framework , to help organizations establish a governance program to manage the use of open-source software, yet there is a limited number of bug bounty programs focused on open-source projects rather than closed product ecosystems. The most comparable initiative is HackerOne’s bug bounty program , which rewards researchers for discovering vulnerabilities impacting open-source software projects and offers an average bounty of $500. Going forward, we can expect to see more vulnerability disclosure and bug bounty programs come to light as more organizations recognize the value of crowdsourced security in reducing the risks of open-source software. 
"
14,023
2,022
"Open source security gets a boost with new scorecard and best practices | VentureBeat"
"https://venturebeat.com/security/openssf-new-scorecard-best-practices"
"Open source security gets a boost with new scorecard and best practices There is no shortage of challenges when it comes to securing open source software and no shortage of ideas for how to mitigate risks. It is the stated mission of the OpenSSF (Open Source Security Foundation ) to help improve the state of open source security, and that is precisely what it is doing. The OpenSSF is part of the Linux Foundation and has multiple ongoing efforts across different aspects of the software development lifecycle. On September 7, 2022, the organization announced the latest iteration of its Scorecards effort, an initiative designed to help open source projects and their users identify the state of security within a project. 
The updated scorecards come a week after the OpenSSF issued new guidance and best practices on how to secure npm , which is a widely used, and often abused, open source package management system for JavaScript. Easier access for open source security scorecards The OpenSSF has its roots in a predecessor effort from the Linux Foundation, known as the Core Infrastructure Initiative (CII), which is where the concept of best practices badges for open source projects was introduced in 2015. The badge projects became part of the OpenSSF’s Scorecards effort in 2020. With security scorecards, anyone can run a scan against an open source code repository and automatically identify the general state of security. Badges enable an open source project to easily publicly display scorecard results showing the state of best practices. With the new version of scorecard badges, the OpenSSF is looking to make it easier to share and more broadly access scorecard information with a programmatic approach. There is now a REST API that can enable anyone to get a data stream of access to the scorecard information that can then be used for analytics and trend analysis. “Up until now, anybody could download the scorecard tool and run it, but now they don’t have to run it to get all the information,” David Wheeler, director of open source supply chain security at the Linux Foundation, told VentureBeat. Best practices for npm might be obvious, but still important Looking beyond scorecards, the OpenSSF has taken aim at providing very specific guidance to help npm users and developers be more secure. Finding malware in npm libraries is not uncommon. Among the high-profile security incidents with npm was one in 2021 that the U.S. Cybersecurity and Infrastructure Security Agency warned about in an advisory. 
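Returning to the REST API mentioned above: as a rough sketch of what programmatic access to scorecard data could look like, the snippet below builds a per-repository query URL and decodes the JSON response. The host name and URL shape are assumptions for illustration; consult the Scorecards project documentation for the actual endpoint:

```python
import json
from urllib.request import urlopen

# Assumed endpoint shape for a Scorecard-style REST API (illustrative only).
API_HOST = "https://api.securityscorecards.dev"

def scorecard_url(platform: str, org: str, repo: str) -> str:
    """Build the per-repository scorecard query URL."""
    return f"{API_HOST}/projects/{platform}/{org}/{repo}"

def fetch_scorecard(platform: str, org: str, repo: str) -> dict:
    """Fetch and decode the scorecard JSON for one repository."""
    with urlopen(scorecard_url(platform, org, repo)) as resp:
        return json.load(resp)

# Example (network call, so not executed here):
# data = fetch_scorecard("github.com", "ossf", "scorecard")
# print(data.get("score"), [c["name"] for c in data.get("checks", [])])
```

A data stream like this is what makes the analytics and trend analysis the OpenSSF describes possible without ever running the scanner locally.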
Wheeler noted that the best practices guide doesn’t necessarily introduce any new concepts to open source security; rather, it reinforces ideas and approaches that are well known to help mitigate risk — if only users and developers would implement them. “For the most part the things in the guide were known by many people that have been involved with npm for a long time,” Wheeler said. “But no one knows everything, and a number of folks knew something, but that doesn’t mean the knowledge is universal.” One of the best practices identified in the report is to avoid vendored dependencies. Wheeler explained that a vendored dependency is a risk that occurs when a software developer makes a local copy of an npm library. The challenge is that the local copy isn’t by default being updated when the original vendor or developer of the software makes a change, which could well be to patch a software flaw or vulnerability. Wheeler emphasized that vendored dependency risk is not unique to npm, but rather a broader issue across open source software usage. He explained that historically it wasn’t easy for developers to access the original, upstream software code and that’s why it became a common practice to make a local copy. With modern code repositories, such as GitHub, Wheeler said that’s no longer the case and developers no longer need to make local copies that are completely disconnected from the main codebase. Another best practice for npm that the OpenSSF guide advocates is to embrace the concept of least privilege. The idea behind least privilege is to provide only the minimum required amount of access to an application in order to minimize the potential attack surface. That also involves not including unnecessary access credentials and permissions in code or an npm component. While the best practices guide for npm is the first such guide from OpenSSF, Wheeler expects that more guides for other critical open source projects will emerge in the future. 
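As one small, concrete illustration of the least-privilege idea described above: npm’s `ignore-scripts` setting stops packages from running arbitrary lifecycle scripts at install time, a common malware vector. Treating it as a project-wide default, as sketched below, is one possible policy rather than a recommendation drawn from the OpenSSF guide itself:

```ini
# .npmrc — least-privilege sketch: do not execute packages'
# install-time lifecycle scripts by default.
ignore-scripts=true
```

With this in place, `npm ci` installs exactly what the lockfile pins without executing install scripts; individual trusted packages that genuinely need a build step can then be rebuilt explicitly, overriding the setting for just that command.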
“Npm is widely used and as soon as you get on the web you often end up using the npm ecosystem to some extent, even if the code in the backend is in Python, Ruby or a different language,” Wheeler said. “I think it was important that we prioritize npm, but this is not the last guide and we’re very much interested in having guidance for other situations.” "
14,024
2,021
"Graph database company Neo4j launches free fully-managed cloud service | VentureBeat"
"https://venturebeat.com/apps/graph-database-company-neo4j-launches-free-fully-managed-cloud-service"
"Graph database company Neo4j launches free fully-managed cloud service Neo4j , the company behind the eponymous open source graph database , has officially launched a completely free version of its fully-managed cloud service. The San Mateo, California-based company first debuted Neo4j AuraDB Free as part of an early access program back in June , but today it launches into general availability for everyone. Connecting the dots Graph databases power core functionality in many modern applications and connect the dots between disparate pieces of data that are not so obviously related — it’s the technology that enables Facebook to make friend recommendations and for cybersecurity software to identify threats. 
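To give a flavor of the connected-data queries involved, here is a tiny, self-contained sketch (plain Python, not Neo4j code) of friend-of-a-friend recommendation over an in-memory graph; a graph database expresses the same traversal declaratively, but the underlying idea is identical:

```python
from collections import Counter

# Toy friendship graph: adjacency sets keyed by person.
friends = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
    "cat": {"ann", "dan", "eve"},
    "dan": {"bob", "cat"},
    "eve": {"cat"},
}

def recommend(person: str) -> list[str]:
    """Rank friends-of-friends who aren't already direct friends."""
    direct = friends[person]
    counts = Counter(
        fof
        for friend in direct
        for fof in friends[friend]
        if fof != person and fof not in direct
    )
    # Most shared mutual friends first.
    return [name for name, _ in counts.most_common()]

print(recommend("ann"))  # → ['dan', 'eve']: dan is reachable via both bob and cat
```

At social-network scale this traversal touches billions of nodes, which is exactly the workload graph databases are built to index and query efficiently.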
Above: Neo4j: Drawing connections between movie data It has been a big year for graph database platforms, with TigerGraph scooping up $105 million, ArangoDB raising $27.8 million , and Neo4j itself securing $320 million at a $2 billion valuation. Neo4j has so far offered a range of pricing options, spanning both hosted and self-hosted. After launching its AuraDB enterprise product back in January, the company went about creating a special version of its premium hosted product aimed at smaller development projects. While one of the benefits of open source projects is that developers are free to use and deploy a product as they see fit, not all developers want the hassles or costs involved in self-hosting — which is where AuraDB Free comes into play. Above: Neo4j: Create a new database It perhaps goes without saying that AuraDB Free has major limitations in terms of the number and size of databases it supports, and it is not designed for enterprise-grade projects — but it is completely free, with no trial period, no time limits, and no credit card required. And importantly, it sports most of the platform’s core functionality and developer tools, including data visualizations. Ultimately, this is designed to lure independent developers onto the Neo4j platform early in the process, where they can later upgrade as their project grows. Alternatively, developers in enterprise settings might want to dabble with Neo4j to learn more about how it works or put it to use on prototype applications. However developers decide to use the free hosted version of Neo4j, it all amounts to the same thing — encouraging uptake in the increasingly competitive data landscape. 
“Some of the most innovative applications of Neo4j have come from our community, and we’re hoping AuraDB Free will further empower them and reduce friction to accelerate these ‘aha’ moments,” Neo4j’s CEO and cofounder Emil Eifrem noted in a press release. “Today, we’re stepping up to offer graph developers the easiest way to learn, test, and grow with us in the cloud.” "
14,025
2,021
"Graph database platform Neo4j raises $325M to inform decision-making | VentureBeat"
"https://venturebeat.com/business/graph-database-platform-neo4j-raises-320m-to-inform-decision-making"
"Graph database platform Neo4j raises $325M to inform decision-making Graph platform Neo4j today announced that it raised $325 million at an over $2 billion valuation in a series F round led by Eurazeo, with additional investment from GV. The capital, which brings the company’s total raised to date to over $500 million, will be put toward expanding Neo4j’s platform, workforce, and customer base, the company says. Markets and Markets anticipates the graph database market will reach $2.4 billion by 2023 from $821.8 million in 2018. And analysts at Gartner expect that enterprise graph processing and graph databases will grow 100% annually through 2022, facilitating decision-making in 30% of organizations by 2023. Graph databases and graph-oriented databases leverage graph structures for semantic queries, with nodes, edges, and properties that store and represent data. 
They’re a type of non-relational technology that depicts the relationships connecting various entities — like two people in a social network, for instance — and that can analyze interconnected data. Neo4j offers an open source NoSQL graph database written in Java and Scala with a declarative query language called Cypher. It supports a number of applications, including identity and access management, knowledge graph augmentation, and network and database infrastructure monitoring, as well as risk reporting compliance and social media graphs. Neo4j’s founders encountered performance problems with relational database management systems, which inspired their decision to build the first Neo4j prototype. Emil Eifrem, the founder and CEO of the company, sketched what today is known as the property graph model on an airplane napkin during a flight to Mumbai in 2000. A property graph is a type of graph where relationships are not only connections but carry a name and some properties. “Neo4j has been downloaded more than 120 million times by over 200 million developers, more than 50,000 of which are trained. Our main competition is legacy SQL systems that are bogged down by low-performance queries,” Eifrem told VentureBeat via email. “We see competition as a good thing, as smaller companies tend to stake out market niches that might go unidentified by the larger leaders. Competition fuels innovation, as it motivates every vendor to be better, and that’s good news for customers. ” On the backend Neo4j features constant time traversals that can scale up to billions of nodes, a flexible property graph schema that adapts over time, and drivers for popular programming languages like JavaScript, .NET, Go, and Python. 
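To make the property graph model described above concrete, here is a minimal in-memory sketch (plain Python, not Neo4j’s driver API): nodes carry labels and properties, and each relationship has a name and its own properties, which is exactly what distinguishes a property graph from a bare edge list. In Cypher, the equivalent pattern might read `(:Person)-[:VISITED {year: 2000}]->(:City)`:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                  # e.g. "Person"
    props: dict = field(default_factory=dict)   # key/value properties

@dataclass
class Relationship:
    name: str       # relationships are named ("VISITED", "KNOWS", ...)
    start: Node
    end: Node
    props: dict = field(default_factory=dict)   # ...and carry properties too

emil = Node("Person", {"name": "Emil"})
mumbai = Node("City", {"name": "Mumbai"})
trip = Relationship("VISITED", emil, mumbai, {"year": 2000})

print(trip.name, trip.start.props["name"], "->", trip.end.props["name"])
```

Because the relationship itself is a first-class, property-bearing object, queries can filter on edge data (say, visits after a given year) without joining through intermediate tables, which is the performance problem with relational systems that the founders set out to solve.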
It’s compliant with ACID (atomicity, consistency, isolation, and durability) requirements, meaning it guarantees database transactions even in the event of power failures and errors. And on the AI front, it supports high-performance graph queries on large datasets. Above: An example of a graph database created with the Neo4j platform. Development on Neo4j began in 2003, and it’s been publicly available since 2007 in two editions: a free Community edition and an Enterprise edition. The Enterprise edition adds hot backups, parallel graph algorithms, LDAP and active directory integration, multi-clustering, larger graphs, and more. “Graph technologies are a purpose-built method for adding and leveraging context from data and are increasingly integrated with machine learning and AI solutions in order to add contextual information … Graphs also serve as a source of truth for AI-related data and components for greater reliability. This is especially important for AI bias. Providing these context and connections to AI systems to have more situationally appropriate outcomes mirrors the decisions in the same way humans do,” Eifrem said. “Graphs can also greatly increase the accuracy of machine learning models with the data you already have. Graphs increase the dimensionality of your data by adding relationships which we know are highly predictive of behavior.” Graph database growth Gartner predicts that graph processing and graph databases “will grow at 100% annually over the next few years to accelerate data preparation and enable more complex and adaptive data [analytics].” In a Neo Technology survey conducted by Evans Data Corporation, 49% of companies said that they anticipate taking on real-time recommendations through graph databases in the next two years. Fifty-eight percent said that they’re already using graph databases at scale. Data analytics is the science of analyzing raw data to extract meaningful insights. 
A range of organizations can use data to boost their marketing strategies, increase their bottom line, personalize their content, and better understand their customers. Businesses that use big data increase their profits by an average of 8%, according to a survey conducted by BARC. Startups like TigerGraph , MongoDB, Cambridge Semantics, DataStax, and others compete with Neo4j in a graph database market expected to be worth $2.4 billion by 2023, in addition to incumbents like Microsoft and Oracle. Even Amazon threw its hat in the graph database ring in November 2017 with the launch of Neptune , a fully managed graph database powered by its Amazon Web Services division. But Neo4j — which has over 500 employees — has achieved a few pretty impressive milestones, including more than 3 million downloads as of November 2018 and over 300 enterprise subscription users. The company counts among its current and previous customers Lyft, Walmart, eBay, Adobe, Orange, Monsanto, IBM, Microsoft, Cisco, Medium, Airbnb, NASA, and the U.S. Army. Neo4j customer Meredith Corporation says it scaled its Neo4j graph to analyze 30 billion nodes of digital traffic and has tested capacity to accommodate 100 billion in the future. Recently, Neo4j itself demonstrated real-time query performance against a graph with over 200 billion nodes and more than a trillion relationships running on over a thousand machines. Last year, Neo4j introduced Neo4j for Graph Data Science, which the company claims is the first data science environment built to harness the predictive power of relationships for scenarios like fraud detection, customer and patient journey tracking, and drug discovery. It arrived alongside Neo4j Aura Professional on Google Cloud Platform , a fully integrated graph database service on the Google Cloud Marketplace designed for small and medium-size businesses. 
Neo4j also recently debuted the Neo4j BI Connector, which presents live graph datasets for analysis within popular business intelligence technologies including Tableau and Looker. And the company rolled out the Neo4j Connector for Apache Spark, an integration tool to move data bi-directionally between the Neo4j Graph Platform and Apache Spark. In addition to Eurazeo and GV, Creandum also participated in San Mateo, California-based Neo4j’s latest fundraising round, as did Greenbridge Partners, DTCO, Lightrock, and One Peak Partners. Neo4j previously closed a $40 million venture round led by One Peak. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14026
2021
"What are graph database query languages? | VentureBeat"
"https://venturebeat.com/business/what-are-graph-database-query-languages"
"What are graph database query languages? A new generation of graph databases has taken hold, and a generation of query languages has arrived alongside them. The assorted graph database query languages include the likes of Gremlin, Cypher, and GQL and serve to unpack the information inside graphs. All databases need a way to talk with their clients, and the query languages they speak define what the database can do. Good graph database query languages unlock the power of graph databases by making it possible — and sometimes easy — for developers to ask complex questions about the networks defined in the databases. In the beginning, the languages were proprietary and invented for each new database, but there has been a recent push to create open standards. In the world of relational databases, SQL (structured query language) has been the dominant standard for years. 
It defines a way to search for the rows in a table that match specific criteria. If the data spans several tables, it offers a way to align the tables so all the information is joined together in one consistent collection. It’s good at finding a particular set of entries with a particular field that matches some rule, but it doesn’t do much more than that. Classic relational databases can store graphs, and before graph databases it was common for developers to use them because they were the only option. SQL can answer basic questions, but traditional query languages generally can’t answer the most useful and tantalizing questions. Ironically, perhaps, relational databases are not nearly as good at representing very complex relations as graph databases are. Often, the only solution for a relational database query is to return large blocks of data so the client software can run the analysis. Graph query languages were created to answer more complex questions like: In a family tree, how many second cousins does a person have? In a social media graph recording friends or followers, how many degrees of separation are there between two users? In a graph of a company’s supply chain, what is the longest number of hops between the factory and a customer? In a collection of banking transactions, are there some people who are connected to an above-average number of fraudulent transactions? In a computer network, where can a new connection with higher bandwidth fix a bottleneck? The graph databases require different models because the analysis must go deeper than the basic relations that can be stored in tables. Some queries require following several links or hops before calculating certain statistics. In the beginning, each graph database created a proprietary query language. 
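Questions like the degrees-of-separation example above reduce to multi-hop traversals, which is exactly what graph query languages optimize for. A rough pure-Python sketch of the idea (toy data, not a real query engine):

```python
from collections import deque

# Toy social graph: user -> set of direct connections.
friends = {
    "ann": {"bob"}, "bob": {"ann", "carol"},
    "carol": {"bob", "dave"}, "dave": {"carol"},
}

def degrees_of_separation(start, goal):
    """Breadth-first search; returns hop count, or -1 if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        user, hops = queue.popleft()
        if user == goal:
            return hops
        for f in friends[user] - seen:
            seen.add(f)
            queue.append((f, hops + 1))
    return -1

print(degrees_of_separation("ann", "dave"))  # ann -> bob -> carol -> dave: 3 hops
```

A graph database runs the equivalent traversal natively against its stored edges, rather than simulating it with repeated relational joins.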
Lately, the graph database companies have been cross-pollinating by adding new implementations and working toward an open source standard. The most common graph query languages are: Gremlin — A graph searching language originally developed for the Apache TinkerPop project that allows procedural or declarative queries. Cypher — First created by Neo4j and later adopted by others as openCypher, this declarative language allows searching for nodes and edges that match particular properties. GQL — This proposed standard attempts to unify the styles of Cypher, GSQL, and PGQL. SPARQL — A standard developed for querying knowledge graphs stored in the RDF format. PGQL — Oracle’s original language for searching and collecting information from nodes that match specifications. GSQL — TigerGraph’s original procedural language. AQL — ArangoDB’s original procedural language. GraphQL — Although the name suggests it supports graph querying, this is a more general query language for efficiently searching most document and relational databases. It is finding some uses with graph databases, but only for supporting the same general queries as it does with relational databases. There are a number of major differences between the query languages. Some are said to be “declarative,” while others are “procedural.” That is, some let the developer declare what they want by writing simple rules for defining a subset. The database takes the rules, constructs a search plan using any available indices and then finds all potential matches. One might ask to find all bank transactions over $10,000 that are within 10 miles of each other. Another might search for all social media users who are connected to each other and haven’t posted in two weeks. The rules can include all of the filtering on values found in standard query languages (“WHERE AGE<20”), as well as other more complex rules about the network of connections (“IS RELATED TO”). 
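The declarative/procedural contrast can be sketched in a few lines of Python (toy data; a real declarative engine would also plan the query using indexes):

```python
transactions = [
    {"id": 1, "amount": 15000, "flagged": True},
    {"id": 2, "amount": 800,   "flagged": False},
    {"id": 3, "amount": 12000, "flagged": False},
]

# "Declarative" style: state the rule, let the engine find matches.
large = [t for t in transactions if t["amount"] > 10000]

# "Procedural" style: spell out the traversal step by step.
large_proc = []
for t in transactions:
    if t["amount"] > 10000:
        large_proc.append(t)

assert large == large_proc  # same result, different level of control
```

The declarative form is shorter and leaves optimization to the engine; the procedural form gives the developer control over how the data is walked.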
Overall, graph query languages are most successful when they search through the graph of relationships. The procedural versions come closer to traditional computer languages by allowing the developer to control how the database searches through the items, often by writing loops or other control structures. In general, declarative languages are easier to understand and use because they hide much of the work of searching, but procedural languages are more powerful. Some databases offer a combination of both. Another major difference comes from the structure of the database itself. Some support the RDF model, while others support so-called property graphs. The RDF model is a W3C standard first designed to encode semantic information. Property graph models tend to be more general and flexible, and some databases support both models. How do legacy players approach graph query languages? Oracle brought graph capabilities to its main database by adding graph searching functions to its regular SQL query language. Extensions called PGQL (Property Graph Query Language) offer a concise way to search graphs and create reports about nodes that match criteria. Its graph analytics framework starts with dozens of common algorithms that can be extended to build complex summaries of the underlying data. It supports both property graphs and RDF-style graphs. Microsoft added graph capabilities to SQL Server in 2017 and extended its version of SQL with a MATCH clause that matches property patterns. The searching can be extended with stored procedures for imperative queries. Microsoft’s Cosmos database in the Azure cloud supports the Apache TinkerPop API, and thus all Gremlin-style queries. Amazon’s main graph database — AWS Neptune — supports both property graphs and RDF-style graphs. The property graphs can be searched with Gremlin-style queries, while SPARQL is used for the RDF-style graphs. 
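The RDF side of that split stores everything as subject-predicate-object triples, and a SPARQL query is essentially a set of triple patterns with variables. A minimal sketch of the idea (hypothetical data, not any vendor’s actual API):

```python
# RDF boils down to (subject, predicate, object) triples.
triples = [
    ("neo4j",   "type",           "graph_database"),
    ("neo4j",   "query_language", "cypher"),
    ("neptune", "type",           "graph_database"),
    ("neptune", "query_language", "sparql"),
]

def match(s=None, p=None, o=None):
    """SPARQL-style triple pattern: None acts as a wildcard variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?s WHERE { ?s type graph_database }
dbs = [s for s, _, _ in match(p="type", o="graph_database")]
```

A property graph would instead hang those attributes directly on node and edge objects, which is why the two models need different query languages.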
IBM has been working with a number of graph databases, like Neo4j, and also offering its own product as a service in its cloud. The service, called IBM Graph, uses the TinkerPop API with Gremlin, as well as a simpler API for basic retrieval. How are the upstarts responding? Neo4j has in recent years become one of the most influential graph databases, and it remains a leader in the field. But it remains a separate company and so is grouped here with the upstarts. In fact, several of the graph database players are of long lineage. Neo4j has vigorously encouraged other companies to use its query language, Cypher, via the openCypher project. Neo4j is also a big supporter of the GQL standardization process, and the company supports GraphQL for some queries. TigerGraph stores property graphs and queries them with GSQL, a procedural approach that simplifies parallel processing for scaling to larger datasets. The company behind that database offers a sophisticated visual tool for exploring and querying the dataset. Called GraphStudio, it is available as both a product and a cloud service. OrientDB is an open source database that uses Gremlin and SQL for querying. It was built by a company that was purchased by SAP, which is now integrating it with the SAP product line. ArangoDB is designed to support both graph and NoSQL document datasets. The open source database is available as both a community edition and a commercial version that can be purchased as a service. Its associated query language, known as AQL, offers a procedural approach to searching through the data. AllegroGraph stores RDF-style graphs that can be queried with SPARQL and RDFS++, as well as with programming language extensions like Prolog, a logic programming language, and Allegro Common LISP. Its knowledge graph explorer, Gruff, runs in browsers for visual querying. The product is available for local installation and in clouds like AWS. 
Ontotext is focused on creating big knowledge graphs, and its GraphDB supports SPARQL queries for RDF-style graphs. Ontotext offers three versions (Free, Standard, and Enterprise) with most of the same features, although the free version is limited to two concurrent queries. Is there anything that graph database query languages can’t do? The graph query languages can offer a concise way to search for particular combinations of entries that fit specific patterns. Some questions, however well-specified, can be difficult to answer in an efficient way. Certain graph problems, like finding subsets of highly connected nodes called cliques, fall into a class known as NP-complete and may be difficult to solve efficiently. The answers may take exponentially longer to find as the size of the problem grows — in other words, these won’t scale. And it can be dangerously simple to write a query that will take a very long time to solve. "
14027
2022
"What do graph database benchmarks mean for enterprises? | VentureBeat"
"https://venturebeat.com/business/what-do-graph-database-benchmarks-mean-for-enterprises"
"What do graph database benchmarks mean for enterprises? Graph databases are playing a growing role in improving fraud detection, recommendation engines, lead prioritization, digital twins and old-fashioned analytics. But they suffer performance, scalability and reliability issues compared to traditional databases. Emerging graph database benchmarks are already helping to overcome these hurdles. For example, TigerGraph recently used these benchmarks to scale its database to support 30 terabytes (TB) of graph data, up from 1 TB in 2019 and 5 TB in 2020. David Ronald, director of product marketing at TigerGraph, told VentureBeat that TigerGraph uses the LDBC benchmarks to check its engine performance and storage footprint after each release. If it sees a degradation, the results help it figure out where to look for problems. 
The TigerGraph team also collaborates with hardware vendors to run benchmarks on their hardware. This is important, particularly as enterprises look for ways to operationalize the data currently tucked away across databases, data warehouses and data lakes that represent entities called vertices and the connections between them called edges. “With the ongoing digital transformation, more and more enterprises have hundreds of billions of vertices and hundreds of billions of edges,” Ronald said. Dawn of graph benchmarks The European Union tasked researchers with forming the Linked Data Benchmark Council (LDBC) to evaluate graph databases’ performance for essential tasks to address these limitations. These benchmarks help graph database vendors identify weaknesses in their current architectures, identify problems in how they implement queries and scale to solve common business problems. They can also help enterprises vet the performance of databases in a way that is relevant to common business problems they want to address. Peter Boncz, professor at Vrije Universiteit and founder of the LDBC, told VentureBeat these benchmarks help systems achieve and maintain performance. LDBC members include leading graph database vendors like TigerGraph, Neo4j, Oracle, AWS and Ant Group. These companies use the benchmarks continuously as an internal test for their systems. The benchmarks also point to difficult areas, like path finding in graphs, pattern matching in graphs, join ordering and query optimization. “To do well on these benchmarks, systems need to adopt at least state of the art in these areas if not extend state of the art,” Boncz said. Boncz has also seen various other benefits arise from LDBC cooperation. For example, LDBC collaboration has helped drive standardization of the graph data model and query languages. 
This standardization helps ease the definition of benchmarks, is valuable to users and accelerates the field’s maturity. LDBC members also venture beyond benchmarking to start task forces in graph schema languages and graph query languages. The LDBC has also begun collaborating with the ISO working group for the SQL standard. As a result of these efforts, Boncz expects the updated SQL:2023 standards to include graph query functionality (SQL/PGQ – Property Graph Query) and the release of an entirely new standard graph query language called GQL. Types of benchmarks The LDBC has developed three types of benchmarks for various use cases: The Social Networking Benchmark (SNB) suite is the most directly applicable to common enterprise use cases. It targets common graph database management systems and supports both interactive and business intelligence workloads. It mimics the kinds of analytics enterprises might do with fraud detection, product recommendations, and lead generation algorithms. The largest SNB dataset, at Scale Factor 30k, involves processing 36 TB of data with 72.6 billion vertices and 533.5 billion edges. The Graphalytics benchmark is an industrial-grade benchmark for graph analysis. This benchmark can test datasets with up to 100 million vertices and 9.4 billion edges. These are good for measuring classic graph algorithms such as PageRank and community detection. The machine learning and AI community is adopting it to improve model accuracy. The Semantic Publishing Benchmark uses an older web data schema called RDF. It is based on a use case from the BBC, an early adopter of RDF. “Most graph system growth has been around the property graph data model, not RDF,” Boncz said. As a result, the Social SNB aimed at property graph data has received considerably more attention. 
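For a sense of what a classic graph algorithm like PageRank involves, here is a toy power-iteration version in pure Python — an illustrative sketch on a three-node graph, not the Graphalytics reference implementation:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over an adjacency dict (no dangling nodes)."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            for m in outs:
                # Each node shares its rank equally among its out-links.
                new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
# "c" collects links from both "a" and "b", so it ends up ranked highest.
```

Benchmarks like Graphalytics run this kind of iteration over billions of edges, which is where engine and hardware differences show up.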
Plan for real-world use cases Graph database benchmarks are a great tool for helping vendors to improve their products and for enterprises to assess the veracity of vendor claims using an apples-to-apples comparison. “But raw performance doesn’t tell the whole story of any technology, particularly in the granular world of graph databases,” said Greg Seaton, VP of Product at Fluree, a blockchain graph database company. For example, small to medium enterprises may not need to regularly process millions of graph structures, called triples, every second. They may see greater benefit from advanced value-add features like transaction blockchains, level-2 off-chain storage, non-repudiation of data, interoperability, standards support, provenance and time-travel query capabilities, which require more processing than just straight graph, relational or other NoSQL stores. As long as the performance of the graph storage platform is right-sized for the enterprise, and the capabilities also fit the needs of that enterprise, performance past a certain point, although nice to have, is not as crucial as that fit. Seaton said, “Not every graph database has to be a Formula One race car. There are many industry needs and domain use cases that are better served by trucks and panel vans with the features and functionality to support necessary enterprise operations.” Prepping for graph data Machine learning and database benchmarks have played a tremendous role in shaping those tools. Graph database experts hope that better benchmarks could play a similar role in the evolution of graph databases. Ronald sees a need for more graph database benchmarks in verticals. For example, there are many interesting query patterns in the financial sector that the LDBC-SNB benchmark has not captured. “We hope there will be more benchmark studies in the future, as this will result in greater awareness of the relative merits of different graph databases and accelerated adoption of graph technology,” he said. 
Boncz wants to see more audited benchmark results for the existing Social Network Benchmark. The LDBC has shown interesting results for the Interactive Workload benchmark. The LDBC is now finishing a second benchmark for Business Intelligence Workloads. Boncz suggested interested parties check out the upcoming LDBC Technical User Community meeting coinciding with the ACM SIGMOD 2022 conference in Philadelphia. “These events are perfect places to provide feedback on the benchmarks and learn about the new trends,” he said. "
14028
2023
"Achieving electronic engineering efficiency through ML and automation | VentureBeat"
"https://venturebeat.com/ai/achieving-electronic-engineering-efficiency-through-ml-and-automation"
"Achieving electronic engineering efficiency through ML and automation There is a quiet yet significant revolution underway within the massive electronics industry. Harnessing machine learning (ML) and artificial intelligence (AI), companies within the sector are building new software that saves designers, engineers, distributors and manufacturers time and resources, gradually cutting back tired and analog working methods that were previously used for creating electronic products. ML and AI are more advanced than ever. But, despite great strides, it is surprising that a technically established vertical such as electronic engineering is not yet dominating the charge toward automation. For example, printed circuit boards (PCBs), crucial components in all electronic devices, are often still being designed using human engineers’ experiential knowledge and thought processes. 
Design and manufacturing times for PCBs remain archaically reliant on humans. But winds of change are sweeping through the industry; ML is beginning to refine design processes. From improving searches for parts and components, to digitizing legacy engineering documents, to assisting in design generation, ML illuminates insights about processes that would otherwise be invisible to engineers. Assisting platforms So what platforms are available to engineers to reduce PCB design process times, and what are their drawbacks and merits? Let’s start with traditional electrical computer-aided design (ECAD) tools. These are complex software tools designed to allow engineers to perform any kind of detailed design (offering some automation). However, they are usually only tailored to manual engineering work. Examples include Altium Designer, Siemens EDA, Cadence OrCAD, AutoDesk Eagle and Zuken ECAD tools. An alternative form of assistance that is frequently used, yet is largely inefficient, is the office (or project) tool. Even today, engineers are using office tools such as Excel, Atlassian, Visio and others to manage much of their activities, such as maintaining wikis and managing projects. As they were never designed for day-to-day engineering work, these tools have multiple shortcomings, lacking the specificity necessary to save engineers time when completing electronic designs. Up-to-date information critical Database providers additionally offer software tools that give engineers insights into component prices, availability and (some) technical specifications. In the electronics industry, up-to-date information about components and semiconductors is crucial. 
However, this information can undercut and even negate engineers’ progress when they are designing products, because databases lack details about circuits and reference designs that are absolutely necessary to make composition blueprints into a manufacturable reality. The previous three examples are all constituent platforms often used by engineers that, individually and collectively, fail to deliver on informational and organizational coherency or time efficiency. Therefore, there is a distinct necessity for automating platforms, a new class of which has recently entered the market. Cloud-based platforms, focusing on high levels of abstraction and functional design views, provide as much automation as possible and leverage the sharing and collaboration of different engineers. These platforms usually integrate smoothly with existing design tools, such as traditional ECAD. The power and dangers of data and machine learning’s significance A ubiquitous topic of the digital age, not simply in electronic engineering, concerns the evolution of ML and AI amid abundant data flows. Technological capabilities for data storage, compilation and comparison have vastly expanded in recent years, and have thankfully shrunk the time and resources that engineers spend on projects. Despite this, data handling remains a difficult proposition as developers receive more and more information. Without careful management and proper “hygiene” processes in place, more data can mean more issues for those grappling with it. New challenges arise from sheer amounts of data, and particularly bad data. For engineers, having access to billions of datasets is useful up until the point where there are information overloads, which was all too common when PCBs were designed manually, for example. Data must be channeled in ways that render ML appropriate for use in electronic engineering. The future of the industry, and tech more widely, demands a focus on data quality. 
Data must be pointedly compacted to make it easily accessible and digestible. Users need clarity on which data points are essential and what they need to do with them. It will fall to data analysts to decipher the masses of data, with these roles then increasingly attracting higher investment from companies in the near future and beyond. More flexibility, creativity Within electronic engineering, introducing new data types also fosters more flexibility and creativity. Not only can selecting components and creating functional designs be achieved more quickly, but other design characteristics (such as sustainability) can be interwoven into final schematics. In sustainable designs, components are selected based on performance, recyclability and longevity, leading to more appropriate sourcing with new data streams becoming more prominent at the design stage. Ushered in by ML, the overall significance of healthier data management capabilities is the reduction of learning curves required for the industry’s workforce and the corollary effects of this. Ground-level tasks in PCB design previously undertaken by more proficient engineers are now being shifted to less experienced engineers using ML tools. This allows highly trained designers to focus on more specialized tasks and can aid firms with workforce shortages, with ML picking up the slack. Automation vs. human input The premium opportunity for AI and ML in electronic engineering is error removal from design and manufacturing processes. Leveraging proven settings and designs from millions of users helps to avoid mistakes and improves versatility. Users can replace components and adjust designs quickly to market conditions and disruptions. AI and ML-informed automation is — and will continue to be — revolutionary for the sector in design time efficiency. Yet despite the whirlwind advance of automating technology, human input remains paramount. 
Questions over deploying this technology mustn’t concern what we can automate, but what we should automate. Creativity and innovation in design are not spearheaded by AI but by skilled engineers. If we want to drive innovation in electronics, we will always need the human brain. What should be automated are the manual and tedious tasks that waste engineers’ time (which could otherwise be spent on more important areas). Full automation is not the final desired state, but it is the turbocharger firing new efficiencies in electronic engineering. Alexander Pohl is cofounder and CTO of CELUS. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,029
2,023
"We need to build better bias in AI | VentureBeat"
"https://venturebeat.com/ai/we-need-to-build-better-bias-in-ai"
"We need to build better bias in AI At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We’ve all heard of high-profile instances of AI bias, like Amazon’s machine learning (ML) recruitment engine that discriminated against women or the racist results from Google Vision. These cases don’t just harm individuals; they work against their creators’ original intentions. Quite rightly, these examples attracted public outcry and, as a result, shaped perceptions of AI bias into something that is categorically bad and that we need to eliminate. While most people agree on the need to build high-trust, fair AI systems, taking all bias out of AI is unrealistic. In fact, as the new wave of ML models goes beyond the deterministic, they’re actively being designed with some level of subjectivity built in. 
Today’s most sophisticated systems are synthesizing inputs, contextualizing content and interpreting results. Rather than trying to eliminate bias entirely, organizations should seek to understand and measure subjectivity better. In support of subjectivity As ML systems get more sophisticated — and our goals for them become more ambitious — organizations overtly require them to be subjective, albeit in a manner that aligns with the project’s intent and overall objectives. We see this clearly in the field of conversational AI, for instance. Speech-to-text systems capable of transcribing a video or call are now mainstream. By comparison, the emerging wave of solutions not only report speech, but also interpret and summarize it. So, rather than a straightforward transcript, these systems work alongside humans to extend how they already work, for example, by summarizing a meeting, then creating a list of actions arising from it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In these examples, as in many more AI use cases, the system is required to understand context and interpret what is important and what can be ignored. In other words, we’re building AI systems to act like humans, and subjectivity is an integral part of the package. The business of bias Even the technological leap that has taken us from speech-to-text to conversational intelligence in just a few years is small compared to the future potential for this branch of AI. Consider this: Meaning in conversation is, for the most part, conveyed through non-verbal cues and tone, according to Professor Albert Mehrabian in his seminal work, Silent Messages. Less than ten percent is down to the words themselves. Yet, the vast majority of conversation intelligence solutions rely heavily on interpreting text, largely ignoring (for now) the contextual cues. 
As these intelligence systems begin to interpret what we might call the metadata of human conversation (that is, tone, pauses, context, facial expressions and so on), bias — or intentional, guided subjectivity — is not only a requirement, it is the value proposition. Conversation intelligence is just one of many such machine learning fields. Some of the most interesting and potentially profitable applications of AI center not around faithfully reproducing what already exists, but rather interpreting it. With the first wave of AI systems some 30 years ago, bias was understandably seen as bad because they were deterministic models intended to be fast, accurate — and neutral. However, we are at a point with AI where we require subjectivity because the systems can match and indeed mimic what humans do. In short, we need to update our expectations of AI in line with how it has changed over the course of one generation. Rooting out bad bias As AI adoption increases and these models influence decision-making and processes in everyday life, the issue of accountability becomes key. When an ML flaw becomes apparent, it is easy to blame the algorithm or the dataset. Even a casual glance at the output from the ML research community highlights how dependent projects are on easily accessible ‘plug and play’ upstream libraries, protocols and datasets. However, problematic data sources are not the only potential vulnerability. Undesirable bias can just as easily creep into the way we test and measure models. ML models are, after all, built by humans. We choose the data we feed them, how we validate the initial findings and how we go on to use the results. Skewed results that reflect unwanted and unintentional biases can be mitigated to some extent by having diverse teams and a collaborative work culture in which team members freely share their ideas and inputs. Accountability in AI Building better bias starts with building more diverse AI/ML teams. 
Research consistently demonstrates that more diverse teams lead to increased performance and profitability, yet change has been maddeningly slow. This is particularly true in AI. While we should continue to push for culture change, this is just one aspect of the bias debate. Regulations governing the AI system bias are another important route to creating trustworthy models. Companies should expect much closer scrutiny of their AI algorithms. In the U.S., the Algorithmic Fairness Act was introduced in 2020 with the aim of protecting the interests of citizens from harm that unfair AI systems can cause. Similarly, the EU’s proposed AI regulation will ban the use of AI in certain circumstances and heavily regulate its use in “high risk” situations. And beginning in New York City in January 2023, companies will be required to perform AI audits that evaluate race and gender biases. Building AI systems we can trust When organizations look at re-evaluating an AI system, rooting out undesirable biases or building a new model, they, of course, need to think carefully about the algorithm itself and the data sets it is being fed. But they must go further to ensure that unintended consequences do not creep in at later stages, such as test and measurement, results interpretation, or, just as importantly, at the point where employees are trained in using it. As the field of AI gets increasingly regulated, companies need to be far more transparent in how they apply algorithms to their business operations. On the one hand, they will need a robust framework that acknowledges, understands and governs both implicit and explicit biases. However, they are unlikely to achieve their bias-related objectives without culture change. Not only do AI teams urgently need to become more diverse, at the same time the conversation around bias needs to expand to keep up with the emerging generation of AI systems. 
As AI machines are increasingly built to augment what we are capable of by contextualizing content and inferring meaning, governments, organizations and citizens alike will need to be able to measure all the biases to which our systems are subject. Surbhi Rathore is the CEO and cofounder of Symbl.ai. "
14,030
2,022
"IBM Research helps extend PyTorch to enable open-source cloud-native machine learning | VentureBeat"
"https://venturebeat.com/ai/ibm-research-helps-extend-pytorch-to-enable-open-source-cloud-native-machine-learning"
"IBM Research helps extend PyTorch to enable open-source cloud-native machine learning IBM logo is seen on Gae Aulenti square in Milano, Italy, on December 23 2019 Foundation models have the potential to change the way organizations build artificial intelligence (AI) and train with machine learning (ML). A key challenge for building foundation models is that, to date, they have generally required the use of specific types of networking and infrastructure hardware to run efficiently. There has also been limited support for developers wanting to build a foundation model with an entirely open-source stack. It’s a challenge that IBM Research is looking to help solve in a number of ways. “Our question was, can we train foundation models but train it in such a way that we are doing it on commodity hardware? 
And make it more accessible rather than just be in the hands of a few select researchers,” Raghu Ganti, principal research staff member at IBM, told VentureBeat. To that end, IBM announced today that it has developed and contributed code to the open-source PyTorch machine learning project to enable the technology to work more efficiently with commodity ethernet-based networking. IBM has also built an open-source operator that helps to optimize the deployment of PyTorch on the Red Hat OpenShift platform, which is based on the open-source Kubernetes cloud container orchestration project. To infinity and beyond: how IBM helped to extend PyTorch To date, many foundation models have been trained on hardware that supports the InfiniBand networking stack, which is typically only found on high-performance computing (HPC) hardware. While GPUs are the foundation of AI, in order to get multiple GPUs to connect with each other, there is a need for high-performance networking technology. Ganti explained that it is possible to train large models without InfiniBand networking, but it is inefficient in a number of ways. For example, he said that with the default PyTorch technology, training an 11-billion-parameter model over an ethernet-based network could be done with only 20% GPU efficiency. Improving that efficiency is what IBM did alongside the PyTorch community. “This is a very complex problem and there are many knobs to tune,” Ganti said. The knobs that need to be tuned are all about making sure there is optimized GPU and network utilization. Ganti said that the goal is to keep both the network and the GPU busy at the same time to accelerate the overall training process. The code to make PyTorch optimized to work better over ethernet was merged into the PyTorch 1.13 update that became generally available on Oct. 28. 
“We were able to go from 20% GPU utilization all the way to 90%, and that’s like a 4.5x improvement in terms of training speeds,” Ganti said. Shifting PyTorch into high gear for faster training In addition to the code improvements in PyTorch, IBM has also worked to enable the open-source Red Hat OpenShift Kubernetes platform to support the development of foundation models. Ganti said part of what they’ve done is ensure that whatever maximum bandwidth the ethernet network can provide is exposed at the pod level in OpenShift. The use of Kubernetes to train foundation models isn’t a new idea. OpenAI , which is the organization behind some of the most widely used models, including GPT-3 and DALL-E, has publicly discussed how it uses Kubernetes. What IBM claims is new is having the technology to do so being available as open source. IBM has open-sourced a Kubernetes operator that provides the necessary configuration to help organizations scale a cluster to support large model training. With the PyTorch Foundation, more open-source innovation is now possible Until September, PyTorch had been operated as an open-source project managed by Meta. That changed on Sept. 12, when the PyTorch Foundation was announced as a new organizing body run by the Linux Foundation. Ganti said the IBM effort to contribute code into PyTorch actually began before the announcement of the new PyTorch Foundation. He explained that under Meta’s governance, IBM actually couldn’t directly commit code to the project. Instead the code had to be committed by Meta staffers who had commit access. Ganti expects that under the Linux Foundation’s guidance, PyTorch will become more collaborative and open. “I think it [PyTorch Foundation] will improve open-source collaboration,” Ganti said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
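The speedup Ganti quotes follows directly from the utilization figures: if GPU utilization is roughly the fraction of time GPUs spend computing rather than waiting on the network, the same workload finishes faster in proportion to the utilization ratio. A quick back-of-the-envelope check (plain arithmetic, not PyTorch code; the proportionality is an assumed simplification that ignores other bottlenecks):

```python
# Rough utilization-to-speedup arithmetic for the figures quoted above.
# Assumes training time scales inversely with GPU utilization, which
# ignores other bottlenecks and is only a first-order approximation.

baseline_util = 0.20   # default PyTorch over ethernet, per the article
tuned_util = 0.90      # after the optimizations merged into PyTorch 1.13

speedup = tuned_util / baseline_util
print(f"Estimated training speedup: {speedup:.1f}x")  # 4.5x, matching the article
```

This is consistent with the "4.5x improvement in terms of training speeds" figure in the interview.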
"
14,031
2,022
"PyTorch has a new home: Meta announces independent foundation | VentureBeat"
"https://venturebeat.com/ai/pytorch-has-a-new-home-meta-announces-independent-foundation"
"PyTorch has a new home: Meta announces independent foundation Meta announced today that its artificial intelligence (AI) research framework, PyTorch, has a new home. It is moving to an independent PyTorch Foundation, which will be part of the nonprofit Linux Foundation, a technology consortium with a core mission of collaborative development of open-source software. According to Aparna Ramani, VP of engineering at Meta, over the next year the focus will be on making a seamless transition from Meta to the foundation. Long-term, “The mission is really to drive adoption of AI tooling,” she told VentureBeat. 
“We want to foster and sustain an ecosystem of vendor-neutral projects that are open source around PyTorch, so the goal for us is to democratize state-of-the-art tools, libraries and other components that make innovations accessible to everybody.” PyTorch has become a leading AI platform Since creating PyTorch six years ago, some 2,400 contributors have built more than 150,000 projects on the framework, according to Meta. As a result, PyTorch has become one of the leading platforms for AI research as well as for commercial production use — including as a technological underpinning to Amazon Web Services, Microsoft Azure and OpenAI. “The new PyTorch Foundation board will include many of the AI leaders who’ve helped get the community where it is today, including Meta and our partners at AMD, Amazon, Google, Microsoft and Nvidia,” Mark Zuckerberg, founder and CEO of Meta, said in an emailed press comment. “I’m excited to keep building the PyTorch community and advancing AI research.” Ramani will sit on the board of the foundation as the Meta representative. She told VentureBeat the PyTorch move is a natural transition. A natural community-driven transition “This isn’t anything sudden — it’s an evolution of how we’ve always been operating PyTorch as community-driven,” Ramani said. “It’s a natural transition for us to create a foundation that is neutral and egalitarian, including many partners across the industry who can govern the future growth of PyTorch and make sure it is beneficial to everybody across the industry.” Despite being freed of direct oversight, Meta said it intends to continue using PyTorch as its primary AI research platform and will “financially support it accordingly.” Though, Zuckerberg did note that the company plans to maintain “a clear separation between the business and technical governance” of the foundation. 
Ramani pointed out that when PyTorch got its start as a small project by a small group of Meta researchers, nobody expected or anticipated the kind of growth it has enjoyed. “It was really the researchers who were like, let’s go solve this problem,” she said. “But as soon as we started building it, clearly PyTorch was solving something absolutely core to what the industry needed at the time — so it resonated with where AI research is going given the speed of innovation and the flexibility that has become absolutely critical. That confluence helped PyTorch really take off.” "
14,032
2,022
"What is unstructured data in AI? | VentureBeat"
"https://venturebeat.com/ai/what-is-unstructured-data-in-ai"
"What is unstructured data in AI? Many databases are filled with information that’s carefully organized into rows and columns. The type and role of each part is predefined and often enforced by software that checks the data before and after it’s stored. Studying these tables for insights is relatively simple and straightforward for data scientists. Some data sources, though, lack predictable order, but this doesn’t mean that they can’t be useful. The most common sources in this vein are human-readable texts written in natural language. Aside from the basic rules of grammar and some conventions of storytelling and journalism, there is little obvious structure that can be used to make sense of the information and turn it into solid data. Other potential sources of unstructured information come from automatic collection, often from the telemetry of smart devices. 
The burgeoning world of the internet of things (IoT) is producing petabytes of information that are largely unstructured. These files may have a basic format with some predefined fields for timestamps, but the readings from the sensors frequently arrive in raw form with little or no classification or interpretation. Some artificial intelligence (AI) scientists specialize in making sense of what is known as unstructured data. In some sense, all data files come with a certain amount of structure or rules, and the challenge is to look beyond this structure for more in-depth insights. How is unstructured data analyzed? The approaches are largely statistical. The algorithms look for patterns or relationships between various entries. Are the same words typically found in the same sentence or paragraph? Does some value of a sensor spike just before another one? Are some colors common in an image? Many modern algorithms impose an extra basic layer of structure on the data source, a process that’s frequently called embedding the data or building an embedding. A text, for instance, may be searched for the 10,000 most common words that aren’t common in other books or sources. An image may be broken into sections. This rough structure becomes the foundation for later statistical analysis. The creation of these embeddings is often as much an art as it is a science. Much of the work done by data scientists involves designing and testing various strategies for building the rough embedding. In many cases, domain expertise can make it possible for a human to transfer their understanding from the area to the algorithm. For instance, a doctor may decide that all blood pressure readings above a certain value should be classified as “high.” An insurance adjuster may decide that all rear-end collisions are the fault of the trailing car. 
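A domain rule like the doctor's blood-pressure example amounts to a tiny labeling function that maps raw readings onto categories before any statistical analysis runs. A minimal sketch (the 140 mmHg threshold and the function name are illustrative assumptions, not clinical guidance or anything from the article):

```python
# A hand-written domain rule that adds structure to raw sensor readings.
# The 140 mmHg threshold is an illustrative assumption, not a clinical
# recommendation; real systems would take thresholds from domain experts.

def label_blood_pressure(systolic_mmhg: float, threshold: float = 140.0) -> str:
    """Map a raw systolic reading onto a coarse categorical label."""
    return "high" if systolic_mmhg >= threshold else "normal"

readings = [118, 132, 147, 160]
labels = [label_blood_pressure(r) for r in readings]
print(labels)  # ['normal', 'normal', 'high', 'high']
```

Rules this simple are easy to audit, which is part of why domain experts are asked to supply them rather than leaving every boundary to the learning algorithm.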
These rules bring structure to the embeddings and the data to help classify it. [Related: The data that will change the world is scattered all around us ] What are the goals for unstructured AI? The goals vary from domain to domain. A common request is to find similar items in a database. Is a similar face found in this collection of photographs? Is this text plagiarized from a book? Is there another person with a similar resume? Others try to make predictions for the future to help an enterprise plan. This may mean predicting how many cars might be sold next year or how weather conditions might affect demand. This work is often much more challenging than searching for similar entries. Some work solely to classify data. Security researchers, for example, want to use AI to look for anomalies in the log files that should be investigated. Bank programmers, on the other hand, may need to flag potentially fraudulent or suspicious transactions because of rules imposed by regulators. Some classification algorithms work to codify the data simply. Additionally, machine vision algorithms, for instance, may look at faces and try to classify whether the people are happy, sad, angry, worried or any of a large set of emotions. How do some major companies work with unstructured data? The major cloud companies have expanded their cloud services to support creating data lakes from unstructured data. The providers all offer various storage solutions that are tightly coupled with their various AI services to turn the data into meaningful insights. Microsoft’s Azure AI uses a mixture of text analysis, optical character recognition, voice recognition and machine vision to make sense of an unstructured collection of files that may be texts or images. Its Cognitive Search Service will build a language-aware index of the data to guide searching and find the most relevant documents. 
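Similarity queries like "is a similar face or document in this collection?" are typically answered by embedding each item as a vector and comparing vectors. A minimal, library-free sketch using bag-of-words counts and cosine similarity (the sample texts and function names are invented for illustration; production systems use far richer embeddings):

```python
# Bag-of-words embedding plus cosine similarity: a minimal sketch of the
# "find similar items" pattern described above. All example data is invented.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Embed a text as a simple word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = embed("machine learning on unstructured text")
docs = {
    "a": embed("statistical machine learning for unstructured text data"),
    "b": embed("quarterly sales figures by region"),
}
best = max(docs, key=lambda k: cosine(query, docs[k]))
print(best)  # document "a" shares the most vocabulary with the query
```

The same compare-embeddings pattern underlies the plagiarism, resume and face-matching examples; only the embedding function changes.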
Machine learning algorithms are integrated with traditional text searching to focus on significant terms like personal names or key phrases. Its knowledge mining algorithms are tunable by data scientists to unlock more profound studies of the data. The Cognitive Search Service is a bundled product, but the various algorithms for machine learning and search are also available independently. Google offers a wide range of tools for storing data and applying various artificial intelligence algorithms to them. Many of the tools are ideal for using unstructured data. AutoML , for example, is designed to simplify the construction of machine learning models and it’s integrated directly with a number of Google’s data storage options to enable data lakes. Vision AI can analyze images, decode text and even classify the emotion of people in the images. The Cloud Natural Language can find key passages, domain-specific words and translate words. All are sold as cloud products and billed according to usage. IBM also supports building data warehouses and data lakes with tools for both data storage and analysis that encompass the major algorithms from statistical analysis and artificial intelligence. Some of its products bundle together several of these options into task-centered tools. Teams looking for predictive analytics, for example, could use their SPSS Statistics package together with Watson AI Studio to create models for future behavior. The technologies are integrated with IBM’s storage options like the database db2, and can be either installed on premises or used in the cloud. AWS supports creating data lakes for unstructured data with a variety of products. The company’s Redshift tool, for example, can search and analyze data from a variety of sources from the S3 object storage to the more structured SQL databases. It simplifies working with complex architectures with a single interface. 
Amazon also offers a variety of machine learning , machine vision and artificial intelligence services that will work with all of its data storage options. These are generally available as either dedicated instances or sometimes as serverless options that are billed only when used. Oracle also offers a wide range of artificial intelligence tools. The Oracle Cloud Infrastructure (OCI) for Language is optimized for classifying unstructured text by looking for important phrases and entities. It can detect languages, begin translation and classify the sentiment of the writer. The Data Integration tool brings all the power of artificial intelligence to a code-free tool for data analysis and reporting. A collection of pre-built models can work with standard languages, while some teams may want to create their own models. [Related: How to master the data lifecycle for successful AI ] How are startups targeting unstructured data? Making some sense of unstructured data is the focus for many startups specializing in artificial intelligence, machine learning and natural language processing. Some are focused on building better algorithms with deeper insight, and others are creating better models that can be applied directly to problems. The field has a natural overlap with data science and predictive analytics. The process of finding insight in text and visual data is a natural complement to creating reports and generating predictions from more structured data. Some startups focus on providing the tools so that developers can create their own models by working with the data directly. Firms like Squirro , TeX AI , RapidMiner , Indico , Dataiku , Alteryx and H2O AI are just some companies building the foundation for conducting AI experiments with your own data. One particular focus is natural language processing. 
Hugging Face has created a platform where companies can share their models with others, a process that encourages the development of sophisticated, more general models with broad capabilities. Basis Technology is also creating tools that identify significant names and entities in unstructured text. Its product Rosette searches for relationships between the entities and creates semantic maps between them. Others are commercializing their own models and reselling them directly. OpenAI has created a large model of human language, GPT-3, and opened up access through an API so developers can build on its features. It is well suited to work like copywriting, text classification and text summarization. The company is also building a collection of book summaries. GitHub , for instance, uses OpenAI technology in its Copilot tool, which acts like a smart assistant that helps programmers write more code faster. Cohere AI is also building its own model and opening it up via an API. Some developers are using the model to classify documents for projects like litigation support. Others are using the model to help writers find the right words and create better documents. Some are focusing natural language models on specific tasks. You.com , for instance, is building a new search engine that offers more control to users while also relying on smarter AI to extract meaning and find the best answers. Others are packaging similar approaches as APIs for developers. ZIR and Algolia are building pluggable search engines with semantic models that can perform better than pure keyword search. A number of the startups want to bring the power of these algorithms to particular industries or niches. They can tap into unstructured data as part of a larger focus on solving clear-cut problems for their targeted market. Viz AI , for instance, is creating an intelligent care coordinator for tracking patients in various stages of recovery. 
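The API-based distribution model described here is worth making concrete. Below is a minimal sketch of how a developer might assemble a summarization request for OpenAI's completions endpoint; the model name, prompt wording and parameter values are illustrative assumptions, and no network call is made:

```python
import json

API_URL = "https://api.openai.com/v1/completions"  # OpenAI's completions endpoint

def build_summarization_request(document: str, model: str = "text-davinci-003",
                                max_tokens: int = 150) -> dict:
    """Construct the JSON payload for a text-summarization call.

    The model name and parameter values here are illustrative; consult
    OpenAI's API documentation for current models and limits.
    """
    return {
        "model": model,
        "prompt": f"Summarize the following text in two sentences:\n\n{document}",
        "max_tokens": max_tokens,
        "temperature": 0.3,  # low temperature favors focused, factual output
    }

payload = build_summarization_request("Unstructured data is any data without a fixed schema.")
print(json.dumps(payload, indent=2))
```

Sending the payload is then a single authenticated POST, for example with the requests library: `requests.post(API_URL, json=payload, headers={"Authorization": f"Bearer {api_key}"})`.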
Socure hopes to improve identity verification and fraud detection for banks and other industries trying to distinguish between authentic and inauthentic behavior. Exceed AI is creating virtual sales assistants that help customers find answers and products. What AI and unstructured data can’t do The biggest limitation of these algorithms is the quality of any signal in the data. Sometimes the data — structured or unstructured — doesn’t contain enough correlation to support a solid answer to a particular question. If there’s no significant connection, or there’s too much random noise, there will be no signal for the algorithms to identify. This challenge is greater for unstructured data because extra, unhelpful bits are more likely to be part of the information. While the algorithms are designed to sift through the information and exclude the unhelpful parts, there are limits to their power, and there is typically much more noise in unstructured data. The problem is compounded when the signal being sought is weak. If an event doesn’t happen very frequently, detecting it may not yield much profit. Even when the algorithms succeed, some unstructured data analysis does not pay off because success is too infrequent. Poorly defined questions also produce ambiguous results. Some teams approach unstructured data searching for insights, but without clearly written definitions, the answers may be equally ambiguous. A big challenge for many unstructured data projects is simply defining a clear goal so the models can be trained accurately. [Read more: Why unstructured data is the future of data management ] VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14033
2023
"Google announces new generative AI lineup in advance of Microsoft's rumored GPT-4 debut | VentureBeat"
"https://venturebeat.com/ai/google-announces-new-generative-ai-lineup-in-advance-of-microsofts-rumored-gpt-4-debut"
"Google announces new generative AI lineup in advance of Microsoft’s rumored GPT-4 debut [Image: Google Cloud AI] This morning, Google announced a laundry list of new generative AI capabilities and features for developers, through a PaLM API and in Google Cloud, as well as new integrations for users of Google Workspace, including in Gmail and Google Docs. The announcements come just a month after Google unveiled its search chatbot Bard and less than a week after Bloomberg reported that a new internal Google directive “requires generative AI to be incorporated into all of its biggest products within months.” The news also comes in advance of Microsoft’s highly anticipated virtual ‘Future of Work with AI‘ event this Thursday. 
Thanks to comments last week by Microsoft Germany CTO Andreas Braun, that event is rumored to include the release of a multimodal GPT-4, as well as a ChatGPT upgrade for Microsoft 365 applications such as Word and Outlook. [ UPDATE: OpenAI’s GPT-4 was released today in a surprise announcement] During a virtual press briefing yesterday, Google Cloud CEO Thomas Kurian said that the AI announcements “represent the culmination” of many years of work, including bringing together Transformer technology, advances in reinforcement learning, and advances in parallelism and orchestrating large training workloads. These are the key announcements: PaLM API and MakerSuite for app development Google introduced a new PaLM API, which will allow developers to build on Google’s “best” LLM models. The API will come with a tool called MakerSuite, which lets developers build prototypes. According to Google, over time it will also include features for prompt engineering, synthetic data generation and custom-model tuning. A private preview is open to select developers today, Google said, while a waitlist will be announced soon. Generative AI capabilities in Google Cloud Vertex AI is Google Cloud’s end-to-end machine learning platform, which helps data science teams fast-track ML model development and deployment, from feature engineering to model training and low-latency inference, with enterprise-class governance and monitoring. Now, developers can use Google’s foundation models in Google Cloud, initially to generate text and images. Over time, generating audio and video will become options. Google Cloud customers will be able to discover models, create and modify prompts, fine-tune them with their own data and deploy applications. 
Google Cloud also announced a generative AI App Builder, which “connects conversational AI flows with out-of-the-box search experiences and foundation models.” Google Workspace generative AI features Google pointed out that over 3 billion users already enjoy AI features in Gmail and Google Docs, such as Smart Compose and auto-generated summaries. The company announced that a “limited set of trusted testers” will try a new set of generative AI features in Gmail and Google Docs, including the ability to adjust the tone of a draft to be more playful or professional. An ‘open’ Google AI ecosystem Finally, Google also announced what it called the ‘most open and innovative AI ecosystem,’ with a series of partnerships, programs and resources. For example, companies building foundation models, like Cohere and Anthropic, are already using Google Cloud’s infrastructure and GPU and TPU clusters to help train LLMs. Now, AI21 Labs, Midjourney and Osmo are also partnering with Google. A variety of AI solution providers also launched new or expanded partnerships with Google Cloud, including Aible, Anyscale, Gretel, Labelbox, Snorkel AI and Weights & Biases. In addition, several AI application providers were announced as part of a new Built with Google Cloud AI Initiative, including Bending Spoons, Faraday, Glean, Replit and Tabnine. And a group of consulting companies, including Accenture, Deloitte, BCG and McKinsey, committed to growing the Google Cloud AI and generative AI advisory and implementation services and capabilities available to customers. "
14034
2023
"Google consolidates AI research labs into Google DeepMind to compete with OpenAI | VentureBeat"
"https://venturebeat.com/ai/google-consolidates-ai-research-labs-into-google-deepmind-to-compete-with-openai"
"Google consolidates AI research labs into Google DeepMind to compete with OpenAI Google has announced the consolidation of its formerly separate AI research labs — Google Brain and DeepMind — into a new unit named Google DeepMind. The new team will spearhead groundbreaking AI products and advancements while maintaining ethical standards. The move is widely seen as a way to position the company to compete with OpenAI. “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI,” Sundar Pichai, CEO of Google and Alphabet, said in a blog post. 
Google Research, the former parent division of Google Brain, will remain an independent division, focused on “fundamental advances in computer science across areas such as algorithms and theory, privacy and security, quantum computing, health, climate and sustainability, and responsible AI.” AI research and innovation with world-class talent DeepMind has assumed a more prominent role within Alphabet as the tech giant strives to maintain its edge in the highly competitive AI industry, fending off stiff competition from rivals like Microsoft and OpenAI. According to a recent report by The Information , Google Brain software engineers are working in tandem with DeepMind experts to develop Gemini, generative AI software aimed at rivaling OpenAI. According to DeepMind cofounder and CEO Demis Hassabis, the creation of Google DeepMind will bring together world-class AI talent with the computing power, infrastructure and resources to create the next generation of AI breakthroughs and products boldly and responsibly. “By creating Google DeepMind, I believe we can get to that future faster,” Hassabis said in a blog post. “Building ever more capable and general AI, safely and responsibly, demands that we solve some of our time’s hardest scientific and engineering challenges. For that, we need to work with greater speed, stronger collaboration and execution, and simplify the way we make decisions to focus on achieving the biggest impact.” Hassabis claims that the research accomplishments of Google Brain and DeepMind have formed the bedrock of the current AI industry, ranging from deep reinforcement learning to transformers. The newly consolidated unit will build upon this foundation to create the next generation of groundbreaking AI products and advancements that will shape the world. 
“Combining our talents and efforts will accelerate our progress toward a world in which AI helps solve the biggest challenges facing humanity, and I’m incredibly excited to be leading this unit and working with all of you to build it,” he added. In a tweet, Hassabis wrote: “The phenomenal teams from Google Research’s Brain and @DeepMind have made many of the seminal research advances that underpin modern AI, from Deep RL to Transformers. Now we’re joining forces as a single unit, Google DeepMind, which I’m thrilled to lead!” https://t.co/n2cpn91AOl From acquisition to innovation Google’s acquisition of DeepMind for $500 million in 2014 paved the way for a fruitful collaboration between the two entities. Over the years, they have jointly developed several groundbreaking innovations, including AlphaGo, which triumphed over professional human Go players, and AlphaFold, an exceptional tool that accurately predicts protein structures. Other noteworthy achievements over the past decade include word2vec, WaveNet, sequence-to-sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX. These tools have proven highly effective for expressing, training and deploying large-scale ML models. Google stated that an upcoming town hall meeting would clarify what the new unit will look like for teams and individuals, and that the composition of the new scientific board for Google DeepMind will be finalized in the coming days. The company said Google DeepMind would work closely with other Google product areas to deliver AI research and products. The unit will be helmed by Koray Kavukcuoglu, VP of research at DeepMind, and will be supervised by the new scientific board. Jeff Dean will take on the elevated role of Google’s chief scientist, reporting to Pichai. In his new capacity, Dean will serve as chief scientist to both Google Research and Google DeepMind. 
He has been tasked with setting the future direction of AI research at the company, as well as heading up the most critical and strategic technical projects related to AI, including a series of powerful multimodal AI models. As part of the reorganization, Eli Collins, VP of product at Google Research, will join Google DeepMind as VP of product, while Zoubin Ghahramani, the lead of Google Brain, will serve as a member of the Google DeepMind research leadership team. The reorganization underscores the commitment of Google and parent company Alphabet to furthering the pioneering research of both DeepMind and Google Brain. And the race to dominate the AI space has instantly become even more intense. "
14035
2023
"Google releases Bard, a competitor to ChatGPT, Claude and Bing Chat | VentureBeat"
"https://venturebeat.com/ai/google-releases-bard-a-competitor-to-chatgpt-claude-and-bing-chat"
"Google releases Bard, a competitor to ChatGPT, Claude and Bing Chat The landscape of AI chatbots just became a bit more competitive. Google announced today that it’s opening access to Bard, the company’s experimental text-based service that lets you collaborate with generative AI. The company will slowly roll out access to the chatbot, starting with the U.S. and U.K. markets, and will expand to more countries and languages over time. Bard, a conversational AI chatbot comparable to ChatGPT , Claude and Bing Chat , is built on Google’s Language Model for Dialogue Applications ( LaMDA ), which was first introduced in 2021. The new chatbot aims to emulate human-like conversations by utilizing natural language processing and machine learning to generate realistic and helpful responses to user queries. 
In its surprise release, Google acknowledges that large language models (LLMs) have shortcomings and that users should approach Bard with caution: “For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently.” The company also includes a warning at the bottom of the chatbot text input box, saying Bard may display inaccurate or offensive information that doesn’t represent the company’s views. Bard functions similarly to ChatGPT, Claude, Bing Chat and other AI chatbots. Users can input text, and Bard will generate a fitting, often surprisingly helpful reply. A distinctive feature of Bard is that it presents users with several drafts of its response, allowing them to choose the most suitable starting point for their request. If a completely new response is desired, users can prompt Bard again. This is reminiscent of the creative, balanced and precise options provided by Bing Chat. A noteworthy point in the announcement is that Bard serves as a direct interface to LLMs and is intended to complement Google Search, according to the company. Bard also incorporates a “Google it” button, which redirects users to a relevant Google Search on some queries. Throughout the announcement post , Google repeatedly emphasizes that “Bard is an experiment.” The company dedicates an entire section to explaining how Bard is guided by its AI Principles and how it maintains a focus on quality and safety. 
Interestingly, Bard’s unveiling comes almost exactly two years after the publication of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by former Google researchers, an event that led to the dismissal of Timnit Gebru , the former co-lead of Google’s ethical AI team. The research paper questioned “whether enough thought has been put into the potential risks associated with developing [large language models] and strategies to mitigate these risks.” Gebru’s public dismissal became a major news story. Google stated in the announcement: “We’ll continue to improve Bard and add capabilities, including coding, more languages and multimodal experiences. And one thing is certain: We’ll learn alongside you as we go. With your feedback, Bard will keep getting better and better.” To try Bard, users can sign up at bard.google.com. "
14036
2023
"Report finds 82% of open-source software components ‘inherently risky’  | VentureBeat"
"https://venturebeat.com/security/report-finds-82-of-open-source-software-components-inherently-risky"
"Report finds 82% of open-source software components ‘inherently risky’ Today, software supply chain security management company Lineaje released a new report, “ What’s in Your Open-Source Software? ”, which found that 82% of open-source software components are “inherently risky” due to a mix of vulnerabilities, security issues, code quality or maintainability concerns. The report highlighted that while more than 70% of software in the enterprise is open source, these components often aren’t tracked, maintained, updated or inventoried, leaving serious vulnerabilities in the software supply chain for threat actors to exploit. 
This comes less than a week after CISA called for software vendors to implement “secure-by-design” development processes and ship code that’s secure “out of the box.” Lineaje also found significant risk among widely used open-source solutions: analyzing the 44 most popular projects of the Apache Software Foundation, it discovered that 68% of dependencies come from non-Apache Software Foundation open-source projects, many with opaque origin and update mechanisms. “It’s imperative that organizations today understand that open-source software has risks and is tamperable, even if it is very popular or provided by an established brand,” said Javed Hasan, CEO and cofounder of Lineaje. “With more software being assembled than built, it’s become more important than ever to have formal tools to discover software DNA. Developers do not have X-ray vision to see inside a software component they include, nor are most open-source selectors security experts,” Hasan said. Given that 64% of all vulnerabilities have no fixes available yet and can’t be patched, the report echoes CISA’s call for organizations to be more proactive about managing open-source risk. It also recommends that organizations deploy supply chain management tools that can assess the dynamic inherent risk and integrity of individual dependencies and projects. 
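The report's recommendation to assess the risk of individual dependencies boils down, in its simplest form, to checking each pinned component against an advisory list. The sketch below is illustrative only: the package names and CVE identifiers are made up, and real scanners draw on live vulnerability databases such as OSV or the NVD.

```python
# Hypothetical advisory data; real scanners pull from live vulnerability databases.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-0000-0001: remote code execution",
    ("oldparser", "0.9.1"): "CVE-0000-0002: path traversal",
}

def audit(dependencies: dict) -> list:
    """Return a finding string for every pinned dependency with a known advisory."""
    findings = []
    for name, version in dependencies.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

# A project with one risky and one clean pinned dependency.
print(audit({"examplelib": "1.2.0", "safelib": "2.0.0"}))
# → ['examplelib==1.2.0: CVE-0000-0001: remote code execution']
```

A real tool would also walk transitive dependencies, which is exactly where the report found most of the opaque risk.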
"
14037
2023
"OpenAI launches long-awaited ChatGPT for Enterprise | VentureBeat"
"https://venturebeat.com/ai/openai-launches-long-awaited-chatgpt-for-enterprise-but-is-it-playing-catch-up"
"OpenAI launches long-awaited ChatGPT for Enterprise — but is it playing catch-up? [Image credit: VentureBeat made with Midjourney] Today OpenAI announced the launch of ChatGPT Enterprise, a platform that it hopes will entice large business users to invest in its growing software ecosystem. It is a long-awaited milestone that the company has been teasing since it launched ChatGPT last November — but is OpenAI now playing catch-up when it comes to bringing generative AI to the enterprise? After all, not only are many other companies targeting the same enterprise business audience with generative AI — Cohere offers bespoke large language model (LLM) options for the enterprise; Anthropic partnered with Scale AI to target the enterprise; and even Microsoft Azure has its own OpenAI service — but open-source players are in the mix as well. 
Meta’s LLaMA 2, for instance, is available for commercial use. Still, as the first massively popular LLM interface geared toward consumers, with 100 million monthly users at one point , OpenAI’s ChatGPT has already entered the pop-culture lexicon (it was recently mentioned disparagingly at the U.S. Republican presidential debates ). A new enterprise option may convince companies that were holding out for a product from arguably the most recognizable name in generative AI so far. Enterprise-grade generative AI The company says ChatGPT Enterprise focuses on “enterprise-grade security,” unlimited access to GPT-4, extended context windows, advanced data analysis capabilities and customization options. Savvy enterprise technology leaders such as CTOs and heads of IT will be concerned about the security ramifications of ChatGPT Enterprise. However, OpenAI offers several assurances to assuage doubts : “Customer prompts or data are not used for training models,” the company states. It also offers “data encryption at rest (AES-256) and in transit (TLS 1.2+),” and OpenAI says ChatGPT Enterprise “has been audited and certified for SOC 2 Type 1 compliance (Type 2 coming soon).” As for data retention, OpenAI explains that “ChatGPT Enterprise securely retains data to enable features like conversation history. You control how long your data is retained. Any deleted conversations are removed from our systems within 30 days.” In a blog post outlining the new service , OpenAI introduces several features of ChatGPT Enterprise that elevate its capabilities beyond the standard version. Users gain unlimited access to the faster GPT-4, enabling seamless and efficient interactions. The increased context window of 32k tokens allows for processing longer inputs and files, enhancing versatility. 
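The "TLS 1.2+" transit guarantee quoted above is enforced server-side, but client code can insist on the same floor. A short sketch using Python's standard ssl module, illustrating the general mechanism rather than OpenAI's implementation:

```python
import ssl

# Build a client-side context that refuses any protocol older than TLS 1.2,
# so a handshake with a TLS 1.0/1.1-only server fails instead of downgrading.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

Passing this context to an HTTPS client guarantees that traffic to the API is never carried over a protocol version below the advertised floor.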
Advanced data analysis, previously known as Code Interpreter , empowers both technical and non-technical teams to analyze information in seconds. Additionally, shared chat templates enable collaborative workflows, with multiple team members able to engage in a single ChatGPT session, while free credits for OpenAI’s API offer customization options for organizations seeking a fully tailored LLM solution. How much does it cost? OpenAI hasn’t specified, only sending VentureBeat the following via a spokesperson: “It will depend on each company’s use case. Those interested should reach out to us for more information.” Customization and more features planned In its blog post, OpenAI said it “remains committed” to continuous improvement and expansion of ChatGPT Enterprise. Longer-term plans include secure customization with company data integration, a self-serve ChatGPT Business offering for smaller teams, and enhanced power tools for advanced data analysis and browsing. OpenAI said it aims to cater to specific roles within organizations, such as data analysts, marketers and customer support, by providing targeted solutions. Enterprise companies moving deliberately to adopt gen AI Enterprise companies are moving slowly and deliberately to adopt generative AI, if they have started at all — whether because of concerns around enterprise data security and AI “hallucinations” or a lack of the necessary technology, talent and governance to implement generative AI successfully. There’s certainly no doubt that executives want to access the power of generative AI. However, according to a recent KPMG study of U.S. executives, a solid majority (60%) of respondents said that while they expect generative AI to have an enormous long-term impact, they are still a year or two away from implementing their first solution. 
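To gauge whether a document fits in the 32k-token context window mentioned earlier, developers often reach for the rough rule of thumb of about four characters per English token. This is a heuristic sketch, not OpenAI's actual tokenizer; exact counts require a real tokenizer such as tiktoken:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the common ~4-characters-per-token heuristic."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(text: str, context_window: int = 32_000,
                    reserved_for_reply: int = 2_000) -> bool:
    """Check whether a prompt likely fits, leaving headroom for the model's reply."""
    return estimate_tokens(text) <= context_window - reserved_for_reply

doc = "word " * 10_000  # 50,000 characters of toy input
print(estimate_tokens(doc), fits_in_context(doc))  # → 12500 True
```

The `reserved_for_reply` budget matters in practice: a prompt that exactly fills the window leaves the model no room to answer.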
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,038
2,023
"Generative AI can save 5 hours of marketing hustle every week: Salesforce report | VentureBeat"
"https://venturebeat.com/ai/generative-ai-can-save-5-hours-of-marketing-hustle-every-week-salesforce-report"
"Generative AI can save 5 hours of marketing hustle every week: Salesforce report A new survey conducted by Salesforce and YouGov has found that marketers see generative AI as a “game-changer” that can save them about five hours of work every week — that’s more than a month every year, assuming eight-hour work days. As part of their Generative AI Snapshot series, the companies polled more than 1,000 full-time marketers in the United States, UK and Australia in May. The results show most marketers are bullish on the technology, with many already using it in their workflow. However, even as a majority of marketers see generative AI as transformative to their role, many have also raised concerns about the quality and accuracy of generative AI outputs and the lack of skills needed to get the most out of these tools. 
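The headline figure checks out with simple arithmetic, assuming a 52-week year and eight-hour workdays:

```python
# Sanity-check the headline claim: five hours saved per week,
# converted to eight-hour working days over a 52-week year.

hours_saved_per_week = 5
weeks_per_year = 52
hours_per_workday = 8

hours_per_year = hours_saved_per_week * weeks_per_year   # 260 hours
workdays_per_year = hours_per_year / hours_per_workday   # 32.5 days

print(f"{hours_per_year} hours/year = {workdays_per_year} eight-hour days")
```

At 32.5 eight-hour days, the saving indeed exceeds a typical month of roughly 21 working days.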
Nearly 75% say ‘yes’ to generative AI Out of the 1,029 full-time marketers surveyed, 51% said they are already in the process of using or experimenting with generative AI at work, while 22% said they plan to bring it into their workforce very soon. The general benefits, most of them said, would be eliminating gruntwork, allowing more time to focus on strategic work and increasing productivity. Among those using generative AI at present, the most popular use case is basic content creation and writing marketing copy, with as many as 76% handling those tasks with LLM-driven apps like ChatGPT. The next most popular use cases are inspiring creative thinking (71%), analyzing market data (63%) and generating image assets (62%). More broadly, the surveyed marketers suggested that generative AI is expected to help with multiple tasks in their job, starting from creating groups for marketing campaigns and producing those campaigns and journey plans, and progressing to personalizing content, conducting copy testing, and building and optimizing SEO strategy. “Generative AI has the potential to transform how marketers connect with their customers by powering more personalized, automated and effective campaigns — quickly and at scale,” Stephen Hammond, EVP and GM for marketing cloud at Salesforce, said in a statement. It can also help with analyzing campaign performance data, 58% of the respondents said. Yet concerns remain While the general consensus remained that generative AI is or will be transforming marketing hustles, the survey respondents also highlighted certain roadblocks that can easily hinder the technology’s adoption in their field. 
The biggest challenge, the marketers said, is the accuracy and quality of generative AI tools’ output, with 73% noting that the technology lacks human-specific creativity and contextual knowledge and 66% worrying that its results can be biased. Beyond this, many suggested they feel underprepared to make the most of the technology in their workflows. Forty-three percent of respondents said they don’t know how to get the most value out of generative AI, while more than a third said they do not know how to use the technology safely (39%) and effectively (34%). This highlights a major gap that can often lead to data leak blunders, much like what happened at Samsung. To address these problems and successfully use generative AI in their roles, the marketers called for three key improvements: human oversight (66%), the use of trusted customer data for the models (63%), and adequate training (54%) to leverage these models in the workflow. Human oversight and trusted data will ensure on-point, use case-specific outputs, while training will ensure marketers are getting maximum value from the tools without compromising on security or efficiency. "
14,039
2,023
"OpenAI product leader denies claims GPT-4 has gotten 'lazier and dumber' | VentureBeat"
"https://venturebeat.com/ai/no-openai-did-not-make-gpt-4-dumber-says-product-leader"
"OpenAI product leader denies claims GPT-4 has gotten ‘lazier and dumber’ OpenAI has a lot on its plate. The Washington Post reported yesterday that the Federal Trade Commission is investigating the generative AI leader for possible violations of consumer protection law. And on Monday, comedian and author Sarah Silverman sued OpenAI and Meta for copyright infringement of her humorous memoir, The Bedwetter: Stories of Courage, Redemption, and Pee, published in 2010. But while OpenAI lawsuits and investigations may be flying fast and furiously — and product releases such as Code Interpreter for ChatGPT Plus users have continued apace — it was a report Wednesday by Business Insider that OpenAI’s GPT-4 model, which powers ChatGPT, had become “lazier and dumber” due to a “radical redesign” that prompted a response from the company’s product team. 
Community members on OpenAI’s developer forum had been discussing what they perceived as a decrease in GPT-4 quality — losses in reasoning and logic capabilities, API denials and poorer results overall. They speculated OpenAI might have modified the learning algorithm, changed the training data or modified the model’s infrastructure. The complaints and reports of degraded service followed similar posts on the grassroots Reddit communities or subreddits of r/OpenAI and r/ChatGPT for the last several months. One commenter, self-identified as a paying OpenAI subscriber, said that “it went from being a great assistant sous-chef to dishwasher.” In response, Peter Welinder, VP of product at OpenAI, tweeted that not only had the company not made GPT-4 dumber, but each new version was smarter than the one before. His current hypothesis, he said, was that “when you use it more heavily, you start noticing issues you didn’t see before.” He continued: “If you have examples where you believe it’s regressed, please reply to this thread and we’ll investigate.” While some supported Welinder’s comments, others disagreed, with one respondent calling GPT-4 “plain worse.” And certainly part of the problem is that GPT-4 remains a “black box,” so developers don’t know whether changes are being made to the model. That has been a sticking point since the highly anticipated model’s release in March. At that time, there was a raft of online criticism about what accompanied the announcement: a 98-page technical report about the “development of GPT-4.” Many said the report was notable mostly for what it did not include. 
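Welinder's request for concrete examples points at what a systematic check would look like: pin a set of prompts with pass/fail graders and compare pass rates across model versions. A minimal sketch; `call_model` and its canned answers are hypothetical stand-ins for real API calls:

```python
# A pinned-prompt regression check: each eval case pairs a prompt with a
# grader function, and pass rates are compared across model versions.
# `call_model` is a hypothetical stub standing in for a real API call.

def call_model(version: str, prompt: str) -> str:
    canned = {
        ("v1", "What is 17 * 23?"): "17 * 23 = 391",
        ("v2", "What is 17 * 23?"): "Roughly 390 or so.",
    }
    return canned.get((version, prompt), "")

EVAL_SET = [
    ("What is 17 * 23?", lambda answer: "391" in answer),
]

def pass_rate(version: str) -> float:
    """Fraction of eval cases whose grader accepts the model's answer."""
    passed = sum(grader(call_model(version, prompt)) for prompt, grader in EVAL_SET)
    return passed / len(EVAL_SET)

print(pass_rate("v1"), pass_rate("v2"))  # 1.0 0.0 -> evidence of a regression
```

With the model itself a black box, this kind of fixed external eval set is about the only way users can turn "it feels dumber" into comparable numbers.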
In a section called Scope and Limitations of this Technical Report, it says: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” "
14,040
2,023
"How companies can practice ethical AI | VentureBeat"
"https://venturebeat.com/ai/how-companies-can-practice-ethical-ai"
"Guest How companies can practice ethical AI Artificial intelligence (AI) is an ever-growing technology. More than nine out of 10 of the nation’s leading companies have ongoing investments in AI-enabled products and services. As the popularity of this advanced technology grows and more businesses adopt it, the responsible use of AI — often referred to as “ethical AI” — is becoming an important factor for businesses and their customers. What is ethical AI? AI poses a number of risks to individuals and businesses. At an individual level, this advanced technology can endanger an individual’s safety, security, reputation, liberty and equality; it can also discriminate against specific groups of individuals. At a higher level, it can pose national security threats, such as political instability, economic disparity and military conflict. 
At the corporate level, it can pose financial, operational, reputational and compliance risks. Ethical AI can protect individuals and organizations from threats like these and many others that may result from misuse. As an example, TSA scanners at airports were designed to provide us all with safer air travel and are able to recognize objects that normal metal detectors could miss. Then we learned that a few “bad actors” were using this technology and sharing silhouetted nude pictures of passengers. This has since been patched and fixed, but nonetheless, it’s a good example of how misuse can break people’s trust. When such misuse of AI-enabled technology occurs, companies with a responsible AI policy and/or team will be better equipped to mitigate the problem. Implementing an ethical AI policy A responsible AI policy can be a great first step to ensure your business is protected in case of misuse. Before implementing a policy of this kind, employers should conduct an AI risk assessment to determine the following: Where is AI being used throughout the company? Who is using the technology? What types of risks may result from this AI use? When might risks arise? For example, does your business use AI in a warehouse that third-party partners have access to during the holiday season? How can my business prevent and/or respond to misuse? Once employers have taken a comprehensive look at AI use throughout their company, they can start to develop a policy that will protect their company as a whole, including employees, customers and partners. To reduce associated risks, companies should factor in certain key considerations. 
They should ensure that AI systems are designed to enhance cognitive, social and cultural skills; verify that the systems are equitable; incorporate transparency throughout all parts of development; and hold any partners accountable. In addition, companies should consider the following three key components of an effective responsible AI policy: Lawful AI: AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels already apply or are relevant to the development, deployment and use of these systems today. Businesses should ensure the AI-enabled technologies they use abide by any local, national or international laws in their region. Ethical AI: For responsible use, alignment with ethical norms is necessary. Four ethical principles, rooted in fundamental rights, must be respected to ensure that AI systems are developed, deployed and used responsibly: respect for human autonomy, prevention of harm, fairness and explicability. Robust AI: AI systems should perform in a safe, secure and reliable manner, and safeguards should be implemented to prevent any unintended adverse impacts. Therefore, the systems need to be robust, both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in consideration of the context and environment in which the system operates). It is important to note that different businesses may require different policies based on the AI-enabled technologies they use. However, these guidelines can help from a broader point of view. Build a responsible AI team Once a policy is in place and employees, partners and stakeholders have been notified, it is vital to ensure a business has a team in place to enforce it and hold misusers accountable. 
The team can be customized depending on the business’s needs, but here is a general example of a robust team for companies that use AI-enabled technology: Chief ethics officer: Often called a chief compliance officer, this role is responsible for determining what data should be collected and how it should be used; overseeing AI misuse throughout the company; determining potential disciplinary action in response to misuse; and ensuring teams are training their employees on the policy. Responsible AI committee: This role, performed by an independent person/team, executes risk management by assessing an AI-enabled technology’s performance with different datasets, as well as the legal framework and ethical implications. After a reviewer approves the technology, the solution can be implemented or deployed to customers. This committee can include departments for ethics, compliance, data protection, legal, innovation, technology, and information security. Procurement department: This role ensures that the policy is being upheld by other teams/departments as they acquire new AI-enabled technologies. Ultimately, an effective responsible AI team can help ensure your business holds accountable anyone who misuses AI throughout the organization. Disciplinary actions can range from HR intervention to suspension. For partners, it may be necessary to cease using their products immediately upon discovering any misuse. As employers continue to adopt new AI-enabled technologies, they should strongly consider implementing a responsible AI policy and team to efficiently mitigate misuse. By utilizing the framework above, you can protect your employees, partners and stakeholders. Mike Dunn is CTO at Prosegur Security. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
14,041
2,023
"Generative AI and Web3: Hyped nonsense or a match made in tech heaven | VentureBeat"
"https://venturebeat.com/data-infrastructure/generative-ai-and-web3-hyped-nonsense-or-a-match-made-in-tech-heaven"
"Generative AI and Web3: Hyped nonsense or a match made in tech heaven Did I write this, or was it ChatGPT? It’s hard to tell, isn’t it? For the sake of my editors, I will follow that quickly with: I wrote this article (I swear). But the point is that it’s worth exploring generative artificial intelligence’s limitations and areas of utility for developers and users. Both are revealing. The same is true for Web3 and blockchain. While we’re already seeing the practical applications of Web3 and generative AI play out in tech platforms, online interactions, scripts, games and social media apps, we’re also seeing a replay of the responsible AI and blockchain 1.0 hype cycles of the mid-2010s. 
“We need a set of principles or ethics to guide innovation.” “We need more regulation.” “We need less regulation.” “There are bad actors poisoning the well for the rest of us.” “We need heroes to save us from AI and/or blockchain.” “Technology is too sentient.” “Technology is too limited.” “There is no enterprise-level application.” “There are countless enterprise-level applications.” If you exclusively read the headlines, you will come out the other side with the conclusion that the combo of generative AI and blockchain will either save the world or destroy it. All over again We’ve seen this play (and every act and intermission) before with the hype cycles of both responsible AI and blockchain. The only difference this time is that the articles we’re reading about ChatGPT’s implications may, in fact, have been written by ChatGPT. And the term blockchain has a bit more heft behind it thanks to investment from Web2 giants like Google Cloud, Mastercard and Starbucks. That said, it’s notable that OpenAI’s leadership recently called for an international regulatory body akin to the International Atomic Energy Agency (IAEA) to regulate and, when necessary, rein in AI innovation. The proactive move illuminates an awareness of both AI’s massive potential and potentially society-crumbling pitfalls. It also conveys that the technology itself is still in test mode. The other significant subtext: Public sector regulation at the federal and sub-federal levels commonly limits innovation. As with Web3, and whether or not regulatory action takes place, responsibility needs to be at the core of generative AI innovation and adoption. As the technology evolves rapidly, it’s important for vendors and platforms to assess every potential use case to ensure responsible experimentation and adoption. And, as OpenAI’s Sam Altman and Google’s Sundar Pichai notably point out , working with the public sector to evolve regulation is a significant part of that equation. 
It’s also important to surface limitations, transparently report on them, and provide guardrails if or when issues become apparent. While AI and blockchain have both been around for decades, the impact of AI, in particular, is now visible with ChatGPT, Bard and the entire field of generative AI players. Together with Web3’s decentralized power, we’re about to witness an explosion of practical applications that build on progress automating interactions and advancing Web3 in more visible ways. From a user-centric perspective (and whether we know it or not), generative AI and blockchain are both already transforming how people interact in the real world and online. Solana recently made it official with a ChatGPT integration. And exchange Bitget backed away from theirs. Promising or puzzling, every signal indicates that it remains to be seen where the technologies best intersect in the name of user experience and user-centric innovation. From where I sit as the head of a layer1 blockchain built for scale and interoperability, the question becomes: How should AI and blockchain join forces in pursuit of Web3’s own ChatGPT moment of mainstream adoption? Tools like ChatGPT and Bard will accelerate the next major waves of innovation on Web2 and Web3. The convergence of generative AI and Web3 will be like the pairing of peanut butter and jelly on fresh bread — but, you know, with code, infrastructure, and asset portability. And, as hype is replaced with practical applications and constant upgrades, persistent questions about whether these technologies will take hold in the mainstream will be toast. So, what does all this mean for enterprise leaders? Enterprise leaders should view generative AI as a tool worth exploring, testing, and after doing both, integrating. Specifically, they should focus efforts on exploring how the “generative” element can improve work outcomes internally with teams and externally with customers or partners. 
And they should continuously map out its enterprise-wide potential and limitations. It’s time to begin to map out and document where not to use generative AI, which is equally important in my book. Don’t rely on the technology for anything where you need to apply facts and hard data to outputs for community members, partners, teams or investors, and don’t rely on it for protocol upgrades, software engineering, coding sprints or international business operations. On a practical level, enterprise leaders should consider incorporating generative AI into administrative workflows to keep their company’s day-to-day workflows moving faster and more efficiently. Explore its seemingly universal utility to kick off text- or code-heavy projects across engineering, marketing, business and executive functions. And since this tech changes by the day, enterprise leaders should look at every possible new use case to decide whether to responsibly experiment with it en route to adoption, which also applies to work in Web3. Mo Shaikh is cofounder and CEO of Aptos Labs. "
14,042
2,022
"5 top ailments affecting the healthcare data security infrastructure | VentureBeat"
"https://venturebeat.com/data-infrastructure/5-top-ailments-affecting-the-healthcare-data-security-infrastructure"
"5 top ailments affecting the healthcare data security infrastructure While hospitals and healthcare systems have been one of the most popular targets of hackers and cybercriminals in recent years, that picture is starting to improve at many organizations. Hospitals are generally getting better at protecting data. Many are updating their health information technology infrastructure and implementing stronger data security measures. These include encryption of all healthcare data stored, two-factor login authentication, and workforce security training programs. But that road to recovery still eludes some healthcare systems. To get a better idea of how data is being protected in the healthcare system, VentureBeat spoke to Victor Low, senior director of IT at Q-Centrix, a company specializing in healthcare data management. 
Common challenges impacting healthcare data infrastructure Unfortunately, many hospitals and healthcare centers suffer from symptoms of inadequate data infrastructure, staffing or strategy, Low said. “These obstacles impede the flow of data sharing, causing it to become much more complex and complicated. As a result, most healthcare systems choose to lock down the data for protection, while overlooking the need for data integration and sharing,” he explained. There are five common challenges that hospitals and healthcare systems face while managing their data and data infrastructure, Low said. They are: 1. The lack of skilled resources and role-based training “This includes staff who are properly trained in clinical data collection and management technology. Without these resources, data can be more susceptible to attack and subsequent misuse,” Low said. “Hospital and healthcare systems can make greater investments into these areas to address these issues.” 2. Dated technology, security and documentation “No MFA (multifactor authentication), SSO (single sign on), no encryption. Without advanced and modern security protections, data is more likely to be compromised in an attack,” Low said. 3. Complex (and confusing) technology architecture Low pointed out that healthcare systems are especially prone to silos and orphan systems. “Healthcare systems have gone through multiple mergers and consolidation over the past few years. During the course of integration, each healthcare system brings on their existing processes, technologies and personnel,” he explained. “It takes huge effort and resources to transition from one system to another and, in the interim, existing systems are kept in place as a stopgap. 
Oftentimes, these stopgaps stay on due to deprioritization or dependencies and, over time, it builds on top of each other and becomes overlooked.” 4. Multiple oversight and regulatory environment/partners involved “Health systems have their own internal security team and outsource some of the security assessment and/or security work to third parties for best practice. However, these can sometimes result in miscommunication, an overlap of responsibilities and long turnaround,” Low notes. A solution, he said, is “the forming of a single security and compliance committee, composed of key stakeholders from different areas who get together frequently to create a framework and roadmap. This would help uncover underlying risks and inefficiencies in security and compliance and provide a guiding star to existing and new processes and technologies.” 5. It’s going to take more than just a shot to cure healthcare’s data security woes Fixing the data security infrastructure for healthcare is going to take a long-term investment in people and technology. “Summing from the above points, any technology improvement/implementation would take multiple-fold of effort, time and resources for healthcare systems to remediate, on top of being a low-margin business,” Low said. He said to streamline the process, “creating a roadmap and framework for technology implementation and lifecycle” would be a good start. Another good practice to enforce across a healthcare organization is tracking and monitoring all vendors, holding them to the same standards and process companywide. Low explained this would have a threefold effect, in that it would “significantly cut down the vetting and assessment process for the security and technology team, [take] the guessing work out of the process for different vendors and [reduce] overhead.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
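The article above flags missing MFA as a common gap (item 2 in Low's list) and names two-factor login authentication as a core protection. Purely as an illustration, not something drawn from the article, here is the standard one-time-password pair that most two-factor login apps implement: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238), sketched with Python's standard library.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current time window."""
    return hotp(secret, int(time.time()) // period, digits)


# RFC 4226 test key; counters 0 and 1 yield the published vectors 755224, 287082
demo_key = b"12345678901234567890"
```

The server and the user's authenticator app share `demo_key` and compute the same six-digit code independently; a login succeeds only when both sides agree, which is what makes a stolen password alone insufficient.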
14,043
2,023
"How businesses can break through the ChatGPT hype with 'workable AI' | VentureBeat"
"https://venturebeat.com/ai/how-businesses-can-break-through-the-chatgpt-hype-with-workable-ai"
"Guest How businesses can break through the ChatGPT hype with ‘workable AI’ New products like ChatGPT have captivated the public, but what will the actual money-making applications be? Will they offer sporadic business success stories lost in a sea of noise, or are we at the start of a true paradigm shift? What will it take to develop AI systems that are actually workable? To chart AI’s future, we can draw valuable lessons from the preceding step-change advance in technology: the Big Data era. 2003–2020: The Big Data Era The rapid adoption and commercialization of the internet in the late 1990s and early 2000s built and lost fortunes, laid the foundations of corporate empires and fueled exponential growth in web traffic. This traffic generated logs, which turned out to be an immensely useful record of online actions.
We quickly learned that logs help us understand why software breaks and which combination of behaviors leads to desirable actions, like purchasing a product. As log files grew exponentially with the rise of the internet, most of us sensed we were onto something enormously valuable, and the hype machine turned up to 11. But it remained to be seen whether we could actually analyze that data and turn it into sustainable value, especially when the data was spread across many different ecosystems. Google’s big data success story is worth revisiting as a symbol of how data turned it into a trillion-dollar company that transformed the market forever. Google’s search results were consistently excellent and built trust, but the company couldn’t have kept providing search at scale — or all the additional products we rely on Google for today — until AdWords enabled monetization. Now, we all expect to find exactly what we need in seconds, as well as perfect turn-by-turn directions, collaborative documents and cloud-based storage. Countless fortunes have been built on Google’s ability to turn data into compelling products, and many other titans, from a rebooted IBM to the new goliath of Snowflake, have built successful empires by helping organizations capture, manage and optimize data. What was just confusing babble at first ultimately delivered tremendous financial returns. It’s this very path that AI must follow. 2017–2034: The AI Era Internet users have produced massive volumes of text written in natural language, like English or Chinese, available as websites, PDFs, blogs and more. Thanks to big data, storing and analyzing this text is easy — enabling researchers to develop software that can read all that text and teach itself to write.
Fast-forward to ChatGPT arriving in late 2022 and parents calling their kids asking if the machines had finally come alive. It is a watershed moment in the field of AI, in the history of technology, and maybe in the history of humanity. Today’s AI hype levels are right where we were with big data. The key question the industry must answer is: How can AI deliver the sustainable business outcomes essential to bring this step-change forward for good? Workable AI: Let’s put AI to work To find viable, valuable long-term applications, AI platforms must embrace three essential elements: (1) the generative AI models themselves; (2) the interfaces and business applications that will allow users to interact with the models, which could be a standalone product or a generative AI-augmented back-office process; and (3) a system to ensure trust in the models, including the ability to continually and cost-effectively monitor a model’s performance and to teach the model so that it may improve its responses. Just as Google united these elements to create workable big data, the AI success stories must do the same to create what I call Workable AI. Let’s look at each of these elements and where we are today: Generative AI models Generative AI is unique in its wildness, bringing challenges of unexpected behavior and requiring continual teaching to improve. We can’t fix bugs as we would with traditional, procedural software. These models are software that has been built by other software, composed of hundreds of billions of equations that interact in ways we cannot understand. We just don’t know which weights between which neurons need to be set to which values to prevent a chatbot from telling a journalist to divorce his wife. The only way that these models can improve is through feedback and more opportunities to learn what good behavior looks like.
Constant vigilance around data quality and algorithm performance is essential to avoid devastating hallucinations that can alienate potential customers from using models in high-stakes environments where real dollars are spent. Building trust Governance, transparency and explainability, enforced through real regulation, are essential to give companies confidence that they can understand what AI is doing when missteps inevitably occur so that they can limit the damage and work to improve the AI. There is much to applaud in initial moves by industry leaders to create thoughtful guardrails with real teeth, and I urge rapid adoption of smart regulation. In addition, I would require that any media (text, audio, image, video) generated by AI be clearly labeled as “Made with AI” when used in a commercial or political context. Much as with nutrition labels or movie ratings, consumers deserve to know what they’re getting into — and I believe many will be pleasantly surprised by the quality of AI-generated products. Killer apps Hundreds of companies have sprouted up in a matter of months providing applications of generative AI , from creating marketing collateral to crafting new music to creating new medicines. The simple prompt of ChatGPT could potentially surpass the search engine of the Big Data Era — but many more applications could be just as powerful and profitable in different verticals and applications. We’re already seeing massive improvements in coding efficiency using ChatGPT. What else will follow? Experimenting to find AI applications that provide a step-change in the user experience and business performance will be essential to creating Workable AI. The companies that will build their fortune on this new class of technologies will break through these innovation barriers. They’ll solve the challenge of continuously and cost-effectively building trust in the AI while developing killer apps paired with sound monetization built on powerful underlying models. 
Big data went through the same noise and nonsense cycle. Similarly, it will likely take a few generations and missteps, but by focusing on the tenets of Workable AI, this new discipline will quickly evolve to create a step-change platform that’s just as transformative as experts expect. Florian Douetteau is CEO of Dataiku. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
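The "system to ensure trust" the essay calls for centers on continually and cost-effectively monitoring a model's performance and feeding corrections back into it. A toy rolling-approval tracker makes the idea concrete; everything here (the class name, the window size, the alert threshold) is a hypothetical sketch, not anything from the article.

```python
from collections import deque


class FeedbackMonitor:
    """Track user approval of model responses over a rolling window
    and flag when quality drops below an alert threshold."""

    def __init__(self, window: int = 100, alert_below: float = 0.8):
        self.window = deque(maxlen=window)  # 1 = thumbs up, 0 = thumbs down
        self.alert_below = alert_below

    def record(self, approved: bool) -> None:
        self.window.append(1 if approved else 0)

    def approval_rate(self):
        # None until at least one rating has arrived
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self) -> bool:
        rate = self.approval_rate()
        return rate is not None and rate < self.alert_below
```

In a real deployment the thumbs-up/down signal would come from end users, and a `needs_review()` alert would route recent transcripts to a human or trigger retraining, which is exactly the feedback loop the essay argues these models depend on.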
14,044
2,023
"OpenAI rival Cohere raises a fresh $270 million to bring generative AI to the enterprise | VentureBeat"
"https://venturebeat.com/ai/openai-rival-cohere-raises-a-fresh-270-million-to-bring-generative-ai-to-the-enterprise"
"OpenAI rival Cohere raises a fresh $270 million to bring generative AI to the enterprise Credit: VentureBeat made with Midjourney AI startup Cohere is most certainly no longer flying under the radar. The Toronto-based company just announced $270 million in a series C round of funding led by Inovia with participation from Nvidia, Oracle, Salesforce Ventures and others — valuing the company at over $2 billion. “We are at the beginning of a new era driven by accelerated computing and generative AI,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “The team at Cohere has made foundational contributions to generative AI.
Their service will help enterprises around the world harness these capabilities to automate and accelerate.” As VentureBeat reported in February, Aidan Gomez, cofounder and CEO of Cohere AI, had admitted that the company, which offers developers and businesses access to natural language processing (NLP) powered by large language models (LLMs), is “crazy under the radar.” But given the quality of the company’s foundation models, which many say are competitive with the best from Google, OpenAI and others, he said that should not be the case. Cohere, founded in 2019 by Gomez, Ivan Zhang and Nick Frosst, is just one company enjoying the investment frenzy into generative AI since Microsoft invested a fresh $10 billion into OpenAI in January. Just a couple of weeks ago, for example, Anthropic, a San Francisco-based AI startup and another rival to OpenAI, announced that it has raised $450 million in series C funding. And back in October 2022, the Wall Street Journal reported that Cohere had reportedly been in talks with both Google and Nvidia about a possible investment. Cohere emphasized in a press release that its enterprise AI suite is cloud-agnostic and built to be deployed inside a customer’s existing cloud environment or virtual private cloud (VPC), or on-site. Back in 2017, Gomez and a group of fellow Google Brain colleagues, who had co-authored the original Transformer paper “Attention Is All You Need,” were frustrated by the huge adoption of transformers within Google, while there was not a lot of adoption outside of it. As a result, several Transformer co-authors famously decided to leave Google and found their own startups (for example, Noam Shazeer founded Character AI, and Niki Parmar and Ashish Vaswani founded Adept AI) — including Gomez. “We just decided we needed to do our own thing,” Gomez told VentureBeat.
“We felt there were some fundamental barriers keeping enterprises and young developers and startup founders from [adopting NLP] and there’s got to be a way to bring those barriers down.” "
14,045
2,023
"OpenAI turns ChatGPT into a platform overnight with addition of plugins | VentureBeat"
"https://venturebeat.com/ai/openai-turns-chatgpt-into-a-platform-overnight-with-addition-of-plugins"
"OpenAI turns ChatGPT into a platform overnight with addition of plugins OpenAI today announced its support of new third-party plugins for ChatGPT, and it already has Twitter buzzing about the company’s potential platform play. In a blog post, the company stated that the plugins are “tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.” A sign of OpenAI’s accelerating dominance The announcement was quickly received by the public as a signal of OpenAI’s ambitions to further its dominance by turning ChatGPT into a developer platform.
“OpenAI is seeing ChatGPT as a platform play,” tweeted Marco Mascorro, cofounder of Fellow AI. And @gregmushen tweeted: “I think the introduction of plugins to ChatGPT is a threat to the App Store. It creates a new platform with new monetization methods.” In sharing the announcement, OpenAI CEO Sam Altman tweeted: “We are starting our rollout of ChatGPT plugins. you can install plugins to help with a wide variety of tasks. we are excited to see what developers create!” OpenAI, he said, is offering a web browsing plugin and a code execution plugin. He added that the company is open-sourcing the code for a retrieval plugin. The plugins, he said, are “very experimental still,” but maintained that “we think there’s something great in this direction; it’s been a heavily requested feature.” ChatGPT plugins: Major milestone in development of AI chat OpenAI announced that plugin developers who have been invited off the company’s waitlist can use its documentation to build a plugin for ChatGPT. The first plugins have already been created by companies including Expedia, Instacart, Kayak, OpenTable and Zapier. According to Expedia, their new plugin simplifies trip planning for ChatGPT users. “Until now, ChatGPT could identify what to do and where to stay, but it couldn’t help travelers shop and book,” said a press representative in an email. Now, once a traveler enables the Expedia plugin, they can bring a trip itinerary created through a conversation with ChatGPT “to life” with information powered by Expedia’s travel data including real-time availability and pricing of flights, hotels, vacation rentals, activities and car rentals.
When ready to book, they’ll be sent to Expedia, where they can log in to see options personalized to what they prefer, as well as member discounts, loyalty rewards and more. The update represents a major milestone in the development of AI chat as a platform for accessing and interacting with the internet. ChatGPT is not only providing a service, it is creating an ecosystem where developers can create and distribute their own plugins for the benefit of users. This is similar to how Apple’s App Store revolutionized the mobile industry by allowing third-party apps to flourish on its devices. ChatGPT’s plugin feature could potentially open up new possibilities and markets for AI chat in the future. OpenAI said they would begin extending plugin alpha access to users and developers from its waitlist and plan to roll out larger-scale access “over time.” "
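The article says invited developers could use OpenAI's documentation to build a plugin; the core of that flow was a small JSON manifest pointing the model at an OpenAPI spec describing the developer's endpoints. What follows is a sketch reconstructed from memory of OpenAI's early plugin docs: the field names may have since changed, and the example.com URLs and "Todo List" plugin are placeholders, not from the article.

```python
import json

# Sketch of the ai-plugin.json manifest early ChatGPT plugins were built
# around (served at /.well-known/ai-plugin.json on the developer's domain).
# Field names are from memory of OpenAI's docs; treat them as assumptions.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo List",
    "name_for_model": "todo",
    "description_for_human": "Manage your todo list from chat.",
    "description_for_model": "Plugin for managing a user's todo items.",
    "auth": {"type": "none"},
    "api": {
        # The model reads this OpenAPI spec to learn which endpoints exist
        # and when to call them
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

serialized = json.dumps(manifest, indent=2)
```

The division of labor is notable: the developer only publishes a description of their API, and the language model itself decides when to call it, which is what let companies like Expedia or Instacart ship plugins so quickly.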
14,046
2,022
"How embracing automation could change the future of work | VentureBeat"
"https://venturebeat.com/automation/how-embracing-automation-could-change-the-future-of-work"
"Guest How embracing automation could change the future of work The industrial advancements of the past — from machines enabling mass production to the introduction of computers and automation — have all led to the tipping point we are navigating today. Organizations of all sizes and across sectors are increasingly adopting advanced digital technologies. This will transform the way we work as drastically as past achievements did a century ago. Leaders need to ensure their organizations can seamlessly navigate these shifts and have the necessary resources to drive the promise of digitization forward. While businesses had some time to prepare for digital transformation, the pandemic propelled those efforts. In fact, nearly 97% of enterprise decision-makers said they believe the pandemic sped up their company’s digital transformation efforts.
Yet, there are still leaders who are paralyzed by indecision or simply do not know how to evolve. By making small changes to the way their organizations work, they will be able to shape brighter, more fulfilling futures for employees. Shifting the conversation Another 4.2 million Americans quit their jobs this past spring, according to the U.S. Department of Labor — signaling to leaders everywhere that we need to do better. We need to make the workplace more flexible — and in the industrial sector — we need to remove mundane, repetitive tasks to allow workers to focus on innovation. One factor that has some leaders holding back their workforce, whether knowingly or unknowingly, is fear. That’s because the conversation around intelligent solutions, advanced automation and digital technologies like artificial intelligence (AI) and machine learning (ML) has veered off course for far too long. Nearly 61% of global respondents to a PwC survey said they are worried that automation is putting people’s jobs at risk. Meanwhile, many industrial workers today are stuck handling manual and repetitive tasks that do not benefit them or further their knowledge or expertise in their chosen field. The fear that is embedded in the conversation about digital evolution needs to change. More than most, Richard Gerver, a well-known author, educator and speaker, is familiar with this difficulty. “We now need a workforce that are more entrepreneurial, that are more dynamic, more creative, more innovative, more collaborative,” Gerver said. “…but the challenge, the problem in a way, is that we’re still educating people en masse to fill jobs in those factories and in those offices which are largely technical and about routine cognition.
And so, we’re starting to see the early stages of a major clash between educated people and the jobs that are available for them…” Yes, digital technologies will disrupt and transform the types of jobs available, and there may be a decrease in demand for the more mundane jobs. However, there will be a greater increase in more strategic, future-forward positions including data scientists, AI and ML specialists and robotics engineers. The key here for leaders is to upskill or teach their existing and new employees to be ready for these emerging positions. Leading companies are already doing so by allocating resources and creating training programs, and workers themselves are more than willing to participate. A Boston Consulting Group survey found that employees’ willingness to retrain exceeds 70% in roles that have experienced pandemic-related disruption and are most at risk of being replaced by technology. Alternatively, those who are not taking steps to realize efficiencies within operations or considering how to best prepare their workers will undoubtedly feel the heat. People and technology should drive transformation Once we change the conversations we are having about automation and other advanced technologies — and when there is widespread knowledge of who benefits — then it is time to act. This stage of digitization — pivoting from concept to creation — can feel overwhelming, and the path forward can look murky, but it is important to start somewhere. The appetite to better equip the workforce to navigate the future is already in full force. According to an Automation Anywhere report , almost all (95%) of respondents now consider intelligent automation a key component of their transformation strategies. Organizations are extensively incorporating automation into their efforts for digital transformation by centralizing automation planning. So, where do you start if you haven’t already? 
Leaders should use two main pillars, people and technology, as their guiding functions to prepare their organizations for the next wave of transformation: People: First and foremost, people should be the organization’s top priority and number one investment. Take the time to listen to and better understand the workforce. What are their actual challenges versus assumed challenges? What are their goals and ambitions, and do they have the resources to get them there? Then, use that information to drive change for the organization. Create the upskilling and reskilling programs needed to provide more opportunities for employees. Take an active role in ensuring their future success and happiness, which can ultimately help to retain top talent. Listen, advocate and act, repeatedly. Technology: Gain a better understanding of the solutions, systems and processes that force the workforce to do repetitive and soul-crushing tasks. Then, say goodbye to those clunky, slow platforms and invest in technologies like automation and digital workers that will make a positive impact for employees and the business. This has been the biggest sticking point for many organizations, but the pace at which technology is advancing, coupled with an increase in competitiveness across sectors, is serving as a forcing function to charge ahead. According to an EY index, nearly three-quarters of executives (72%) acknowledge that they “must radically transform their operations during the next two years to compete effectively in their industry,” up from 62% in 2020. Now is the time to take it a step further. It is time to move beyond acknowledging the sweeping impact that advanced technologies will have on businesses and employees, and create a culture that embraces the change and allows everyone to experience the benefits of the digital evolution. Mihir Shukla is the CEO and cofounder of Automation Anywhere. "
14,047
2,023
"Data center modernization: The heavy -- and rising cost -- of doing nothing | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-center-modernization-the-heavy-and-rising-cost-of-doing-nothing"
"Sponsored Data center modernization: The heavy — and rising cost — of doing nothing Presented by AMD Competitive advantage today rests on an enterprise’s ability to deliver exceptional time-to-results for business-critical applications or consumer-facing services, ever faster and ever more reliably. In the background — or in the data center, to be more precise — this digital transformation and the accompanying explosion in company data has thrust CIOs and IT leaders into an era of relentless scaling. Even as organizations grapple with inflation and economic uncertainty, they are facing calls to provide a high-performance foundational compute infrastructure across the enterprise to develop new delivery models and handle new use cases. These include: Streamlining operations and reducing costs while enhancing sustainability (by lowering energy expenses and emissions). Enabling permanent remote and hybrid working, often with virtualized desktop infrastructure (VDI).
Supporting AI, machine learning and database analytics, plus new deployment models — notably containerization and cloud native. Mining data effectively to deliver insights that drive revenue growth and increase customer “stickiness.” Responding and adapting quickly and flexibly to rapid business changes and evolution to enable ongoing transformation. New demands on the data center, economic uncertainty, inflation running well above recent levels (principally due to sharply higher energy costs) — these challenges are outside the control of CIOs and infrastructure decision-makers. And, with CAPEX and OPEX budgets under strain, a tempting option in these circumstances might be to hold fire and postpone investment in data center infrastructure — even if that investment would deliver higher performance within a shrinking power, cost and space envelope. This can be an especially seductive argument when the data center’s servers are already paid for, a common situation given that the average age of these servers is 3–5 years. Surely, the argument goes, it’s better to wait a while, reducing CAPEX and avoiding the effort and cost of upgrades. The older the infrastructure, the greater the cost The trouble is those aging servers are not cost-free. The performance of older equipment declines over time, while the time, cost and space needed to keep it running rises. Older servers are more likely to crash, causing unplanned downtime and higher maintenance costs. They are far more vulnerable to sophisticated, targeted attacks. Data centers that push the limits of existing power, cooling and space face increased power costs. Over time they become increasingly unable to keep pace with growing, changing business demands. So, eking out a few more years from aging infrastructure may save on CAPEX, but at the expense of rising OPEX. It also risks loss of revenue and competitive advantage. 
The fact is, when it comes to data centers, kicking IT infrastructure refreshment down the road is not an option. To serve modern customers, the modern enterprise needs modernized data centers that can support simpler, software-defined environments that improve operations, agility, flexibility and scalability with a lower TCO. The three pillars of data center modernization CIOs looking to modernize their data centers need to focus on three main pillars. The first is the requirement to harness all the data in the enterprise to deliver real-time, actionable insights the business thrives on. This calls for systems with the highest bandwidth, lowest latency and fastest throughput. Second is driving savings through infrastructure consolidation. And third is reduced energy consumption and a smaller carbon footprint to meet the sustainability targets that are increasingly a component of corporate stewardship. The good news for CIOs who still have an eye on their CAPEX and OPEX budgets is that all three objectives are achievable using the latest-generation CPUs. These feature huge improvements in the key performance areas of core density and per-core performance, which in turn deliver record-breaking reductions in server count and power consumption, along with corresponding savings in CAPEX and OPEX. As an example, take the performance of AMD’s 4th Gen EPYC processors on each of those pillars of data center modernization. Optimized for different workloads and segments — cloud, performance enterprise and mainstream enterprise — 4th Gen EPYC processors significantly outperform competitive processors in key tasks such as server-side Java applications operations/second for commerce (by 2.1 times) 1 and supporting 2 times the number of ERP users. 2 Meeting the virtualization challenge A key factor in reducing TCO is virtualization efficiency. Increased virtualization performance is an enabler of infrastructure consolidation, which translates into the ability to deploy hundreds more VMs. 
With space and power at a premium, it’s imperative to fit the maximum compute into the smallest footprint, and here 4th Gen EPYC processors deliver quite remarkable performance figures. In a typical deployment scenario of 2,000 VMs, enterprises can replace a rack of 17 Intel Platinum 8490H servers with just 11 AMD EPYC 9654 servers. In hard figures, this adds up to 35% fewer servers, consuming 29% less energy annually, and cutting the enterprise’s annual CAPEX by up to 46%. 3 High performance like this not only enables CIOs to reduce both CAPEX and OPEX, it also plays directly into an enterprise’s drive for greater sustainability and lower carbon footprint. The cost of waiting is increasing all the time Updating to the newest generation of CPUs can improve a data center’s TCO. The latest CPUs are more efficient, allowing IT leaders to provide the same, or greater, level of performance with fewer servers, resulting in lower costs overall. However reluctant IT leaders may be to spend new CAPEX replacing servers that are already paid for, the cost of doing nothing will very soon overtake the cost of modernizing. In other words, the cost of waiting rises all the time. Robert Hormuth is CVP, Architecture and Strategy at the AMD Data Center Solutions Group. Footnotes: SP5-104: https://www.amd.com/en/claims/epyc4#SP5-104 SP5-056A: https://www.amd.com/en/claims/epyc4#SP5-056-A SP5TCO-036: https://www.amd.com/en/claims/epyc4#SP5TCO-036 The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
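The consolidation claim in the article is easy to sanity-check with a quick calculation. The 17-to-11 server counts and the 2,000-VM workload come from the cited AMD claim; the energy and CAPEX percentages depend on AMD's full TCO model and are not reproduced here.

```python
# Sanity-check of the cited consolidation figures: replacing a rack of
# 17 servers with 11 for the same 2,000-VM workload.

def consolidation_ratio(old_servers: int, new_servers: int) -> float:
    """Fraction of servers eliminated by the refresh."""
    return (old_servers - new_servers) / old_servers

def vms_per_server(total_vms: int, servers: int) -> float:
    """Average VM density per server."""
    return total_vms / servers

reduction = consolidation_ratio(17, 11)
print(f"Server reduction: {reduction:.0%}")  # ~35%, matching the claim
print(f"VM density: {vms_per_server(2000, 17):.0f} -> {vms_per_server(2000, 11):.0f} VMs/server")
```

The 35% figure follows directly from the server counts; the deeper TCO numbers (energy, CAPEX) require the workload and pricing assumptions in AMD's linked claims.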
14048
2022
"Data observability company Cribl raises $150M | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-observability-company-cribl-raises-150m"
"Data observability company Cribl raises $150M Cribl, a data observability platform used by businesses such as Accenture, Domino’s and 7-Eleven, has raised $150 million in a series D round of funding. The raise comes as remote work has become a semi-permanent way of life for millions of employees around the globe and a “decentralized” workforce can make it more complex for companies to manage IT systems and data distributed across multiple locations. “Enterprises have no streamlined way to make use of all that data and are getting crushed by the cost of trying to,” Clint Sharp, Cribl’s cofounder and CEO, told VentureBeat. 
Throw into the mix the myriad digital transformation efforts that companies are adopting, combined with a growing need to engage with customers through a software-powered interface, and it’s clear that companies will need to find ways to ensure minimal friction and maximum uptime. “Today, nearly every business is a ‘software business’ — whether you’re a bank or a retailer, software applications are now a primary way businesses interact with their customers and if businesses don’t provide a great experience on those apps, their customers are going to go elsewhere,” Sharp said. Visibility But where, exactly, does Cribl come into all of this? Well, Cribl occupies a space known as “observability,” which is concerned with giving companies visibility into their systems, including details of specific customer interactions such as when they opened an app, what menu options they selected and whether they encountered any errors in the process. It’s all about gleaning real-time insights into the internal state of an application by monitoring a vast array of telemetry data. Founded out of San Francisco in 2017, Cribl has hitherto offered four core products: AppScope; Cribl.Cloud; Cribl Edge; and the star of the show, Cribl Stream, touted as an “observability pipeline” for transporting observability data between any source and destination. The broader observability sphere includes big-name incumbents such as Splunk, Snowflake and Elastic. But rather than competing directly with these platforms, Cribl actually integrates with them, allowing businesses to get their logs, metrics and traces to and from any source. According to Sharp, the most direct competitors are open-source “build-your-own” solutions such as Kafka or FluentD, while its core USP is a “vendor-agnostic” approach that helps companies move all their machine data. 
“A common pain point we hear from our IT and security customers is that they’re using many tools across their functions, with data passing to and from all of them — but there’s no central point of control,” Sharp explained. “This creates more complexity and huge cost inefficiencies. Cribl’s suite of products is open and interoperable by design, meaning they can connect the disparate parts of the data ecosystem and give customers choice and control over all the event data that flows through their corporate IT systems.” Decentralized Today, Cribl has thrown a fifth product into the mix with the announcement of Cribl Search, which enables companies to conduct “search-in-place” queries where the data is created, rather than having to ingest and centralize it all first. This has important ramifications for real-time data access, particularly for security teams that might want to “eliminate blind spots” using instant telemetry data. This also follows a growing trend in the data infrastructure space, which has seen companies steadily embrace decentralized over centralized data platforms. However, the more disparate systems a company has in its stack, the harder it is to derive insights from the data they generate. In terms of how a company might use Cribl Search, well, the use cases are endless. A company with thousands of Kubernetes instances powering numerous types of applications can generate multiple terabytes of telemetry data each day. The time it takes to transport all that information into a centralized repository for deeper analysis and troubleshooting can be the difference between winning and losing customers. Cribl Search takes all this spadework to the source of the data, allowing users to search against data stored in the likes of Splunk, Elasticsearch, or OpenSearch. 
On top of that, it also allows users to search through data as it flows through Cribl Stream, or even when that data is stored “at rest” in what Cribl refers to as an “observability lake,” which is basically a data lake for log data. “Traditionally, if an application were to start performing poorly or encounter errors, the only way to debug that application is to forward the information and store it centrally,” Sharp said. “This creates unnecessary complexity and slows down the process of remediating the performance issue. With Cribl Search, you can troubleshoot directly on the edge, without having to move data first.” Prior to now, Cribl had raised $252 million and with a fresh $150 million in the bank, the company is well-financed to build out Cribl Search and ready it for public launch — the product is being made available today in private beta as part of an early access program. A source close to the deal confirmed to VentureBeat that Cribl’s latest series D investment now values the company at a hefty $2.5 billion — a sharp hike on the $1 billion valuation it reported at its $200 million series C round less than a year ago. Cribl’s series D round was led by Tiger Global Management, with participation from Sequoia, Greylock, Redpoint Ventures, IVP and CRV. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
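The "observability pipeline" pattern the article describes (events flow in from any source, are filtered in transit, and are routed to one or more destinations) can be sketched in a few lines. This is an illustrative toy, not Cribl's actual API; every name in it is hypothetical.

```python
# Toy sketch of the observability-pipeline pattern: filter events in
# transit and route each one to the downstream tools that want it.
from typing import Callable, Iterable

Event = dict
Route = tuple[Callable[[Event], bool], list]  # (predicate, destination buffer)

def run_pipeline(source: Iterable[Event], routes: list[Route]) -> None:
    for event in source:
        # Drop noisy debug events before they cost storage downstream.
        if event.get("level") == "debug":
            continue
        for matches, destination in routes:
            if matches(event):
                destination.append(event)

siem, metrics_store = [], []
events = [
    {"level": "error", "service": "checkout", "msg": "timeout"},
    {"level": "debug", "service": "checkout", "msg": "cache hit"},
    {"level": "info", "service": "payments", "latency_ms": 120},
]
run_pipeline(events, [
    (lambda e: e["level"] == "error", siem),       # security tool gets errors
    (lambda e: "latency_ms" in e, metrics_store),  # APM tool gets latency data
])
print(len(siem), len(metrics_store))  # 1 1
```

The same event stream feeds two destinations with different filters, which is the vendor-agnostic "any source to any destination" idea in miniature.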
14049
2021
"Data observability platform Bigeye lands $45M | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-observability-platform-bigeye-lands-45m"
"Data observability platform Bigeye lands $45M Data quality engineering platform Bigeye today announced that it closed a $45 million series B round led by Coatue, with participation from existing investors Sequoia Capital and Costanoa Ventures. The company plans to put the funding, which brings its total raised to $66 million, toward scaling its team and platform with a particular focus on creating collaborative data reliability workflows. Companies often struggle to manage vast pools of data stored across disparate systems on-premises and in private and public clouds. One study by PricewaterhouseCoopers and Iron Mountain found that while 75% of business leaders feel they’re “making the most of their information assets,” in reality, only 4% are set up for success. 
As the pandemic accelerates digital transformation and the data management stakes rise, data observability and monitoring tools have come into vogue. Eighty percent of teams within organizations are practicing, or intend to practice, observability within two years, according to a 2020 Honeycomb report. Bigeye was founded in 2019 by Kyle Kirwan and Egor Gryaznov, who managed Uber’s first data warehouse for reporting and data analysis. The San Francisco, California-based platform instruments data with monitoring and anomaly detection tools, enabling stakeholders to know the health of the data via APIs and visual dashboards. “With Bigeye, [we’ve] created a data observability platform that lets any company prevent customer-facing data outages, save expensive engineering hours, and build greater trust in the data,” Kirwan told VentureBeat via email. “The tools [we] developed helped Uber rapidly scale its data platform while ensuring reliability. Now, [we’re] applying those lessons and making them available to all companies, even those without Uber’s resources.” Anomaly detection As processes around data remain a hurdle in adopting technologies like AI, observability solutions like Bigeye are attracting investments. There’s Aporia, Monte Carlo, and WhyLabs, a startup developing a solution for model monitoring and troubleshooting. Another rival is Domino Data Lab, which claims to prevent AI models from mistakenly exhibiting bias or degrading. As for Bigeye, it can proactively detect and resolve data issues — automatically recommending and monitoring key data quality metrics. Under the hood, anomaly detection algorithms adapt to changes in businesses without requiring manual tuning. 
“In our mission to be the deepest and most accurate observability platform, Bigeye trains independent anomaly detection models for each data attribute tracked on the platform. Tens of thousands of unique models detect anomalies and learn from user feedback without requiring hand-tuning or guesswork. These models are the result of years of research and continue to be a key area of investment,” Kirwan added. In each of the last four quarters, Bigeye, which has a 23-employee workforce that it plans to roughly double to 40 by 2022, says it added to its existing roster of customers across ecommerce, education, and telecommunications. Instacart, Crux, SignalFire, and Udacity are using Bigeye to monitor data behind their analytics tools, while Clubhouse and Rev.com are using it to prevent disruptive data pipeline problems. “We started our journey with Bigeye as a customer. We were impressed by the strength of the platform, their unique approach, and how that approach directly related to the potential size of Bigeye’s opportunity,” Caryn Marooney, general partner at Coatue, said in a statement. “We are looking forward to partnering with Kyle, Egor, and the entire team as they continue to scale.” "
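Bigeye's per-attribute models are proprietary, but the underlying idea, learning a metric's normal range from its own history rather than asking users to hand-tune thresholds, can be illustrated with a simple statistical check. The row counts and the z-score approach below are illustrative assumptions, not Bigeye's method.

```python
# Generic sketch of history-based anomaly detection on a data quality
# metric: flag a value that deviates sharply from the metric's own past.
import statistics

def is_anomalous(history: list, latest: float, z_threshold: float = 3.0) -> bool:
    """Flag `latest` if it sits more than `z_threshold` standard
    deviations from the historical mean of the metric."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

# Daily row counts for a table: steady around 10k, then a sudden drop.
row_counts = [10_120, 9_980, 10_045, 10_210, 9_890, 10_075, 10_160]
print(is_anomalous(row_counts, 10_050))  # False: within normal variation
print(is_anomalous(row_counts, 3_200))   # True: likely a broken pipeline
```

A production system would track thousands of such metrics per table (freshness, null rates, distributions) and learn per-metric baselines automatically, which is the scaling problem platforms like Bigeye exist to solve.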
14050
2023
"How observability has changed in recent years, and what's coming next | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-observability-has-changed-in-recent-years-and-whats-coming-next"
"Guest How observability has changed in recent years, and what’s coming next In recent years, businesses have become increasingly reliant on observability to manage and maintain complex systems and infrastructure. As systems become even more complex, observability must evolve to keep pace with changing demands. The big question for 2023: What’s next for observability? The proliferation of microservices and distributed systems has made it more difficult to understand real-time system behavior, which is critical for troubleshooting problems. Recently, more businesses have solved this problem with automations to monitor distributed architecture, deep dive tracking and real-time observability. However, each decade has brought a sea change in how observability is expected to function. 
The last three decades have seen transformation after transformation — from on-premise to cloud to cloud-native. With each generation has come new problems to solve, opening the door for new companies to form: The on-premise era led to a few companies like SolarWinds, BMC and CA Technologies. The cloud era (where AWS came in) shook up the market, with new companies like Datadog, New Relic, Sumo Logic, Dynatrace, AppDynamics and more. The cloud-native era (starting in 2019-20) has resulted in another market shakeup. Why is observability changing? The main reason for the current shakeup is that businesses are building software using entirely different technology compared to 2010. Rather than monolithic architectures, they use microservices, Kubernetes and distributed architecture. There are three key reasons why this is the case: Better security Easy scalability More efficiency for distributed teams However, there are challenges as well. According to data from Gartner, 95% of systems will be cloud native by 2025. Since cloud native generates much more data than previous generations of technology, hosting and scaling that data becomes more challenging. This presents three major problems. 1. Prohibitive costs The first problem is relatively straightforward: Cost. All legacy observability companies have become so expensive that most startups and medium businesses can’t afford them. As a result, they’re using old technology to host and process their data — technology that can’t respond to needs in 2023. 2. Evolving priorities in observability Additionally, as the capabilities of observability have become more advanced, the KPIs and OKRs that development and operations teams track have evolved. Before, the primary focus was on ensuring applications and infrastructure didn’t crash. 
Now, dev and ops teams are operating at a deeper level, prioritizing: Request latency Saturation Scalability Traffic maps for where usage is happening Optimizing and predicting future outcomes How new code changes cloud usage In a sentence, dev and ops teams have become more proactive than reactive. This requires technology that can keep up. 3. Changing expectations for observability Finally, the rise of microservices architecture changes how IT teams observe application changes. One microservice can run across a hundred machines, and a hundred small services can run in one machine. There’s no “one-size-fits-all” approach. Dev and ops teams need deeper analysis to understand what is happening across their infrastructure. What will the new generation of observability tools need in 2023? These are the challenges. So how should the new generation of observability tools respond in 2023? From my perspective, here are eight things we will need to win the market. Note: I’m looking at a 30,000-foot view of a vast market. It’s unlikely that a single company will do all these things. But these are the needs, and it’s going to require new companies, technologies and platforms to meet them all. Unified observability All the legacy companies say they’re a unified observability platform. What this really means is that they have different tabs for metrics, logs and traces accessible from their platform. This doesn’t actually solve the problem. What dev and ops teams need is one place from which to view all this data in a single timeline. Only then will they be able to trace correlations and determine root causes of issues — and solve them quickly. Integrated observability and business data As Bogomil from Sequoia mentioned in this blog, most businesses don’t correlate their observability and business data. This is a problem because there are powerful insights to be gained from analyzing the two side by side. 
For example, Amazon recently found that if their website slows by one extra second, they lose millions of dollars daily. This can be huge for eCommerce businesses, especially if they track a slowdown in orders — it could be due to poor application performance. The faster they fix the application, the more orders they receive, and the more revenue they earn. The same goes for software companies. If the application is fast, this improves its usability, which improves user experience, which in turn impacts a number of business metrics. Only by integrating these two sets of data can businesses start to make these connections to improve the bottom line. Vendor-agnostic Open Telemetry (OTel) Companies are looking for a solution that doesn’t lock in one vendor. That’s why most tech companies are contributing to open telemetry (OTel) and making it the go-to tool for data collector agents. OTel has many benefits: interoperability, flexibility, and improved performance monitoring. Predictive observability In the AI era, everything is moving to become a human-less experience. This can enable systems to do the things that humans simply cannot, like predicting errors before they even happen via machine learning. This is not common in observability right now, and there is a major need for more innovation. By adding an AI layer to observability platforms, businesses can predict issues before they happen, and solve them before the user or customer even knows that something is wrong. Predictive security in observability Observability and security work very closely. Most observability companies are moving to security because they control all the data collected from applications and infrastructure. By reading metrics, logs and traces, specifically those that demonstrate unusual behavior, AI should be able to understand security threats. Most SIEM and XDR tools don’t do this. And even if they do, it’s a rule-based model rather than one that analyzes and learns from behaviors. 
Cost optimization Perhaps the biggest challenge in observability is cost. Although cloud storage is getting cheaper and cheaper, most observability companies aren’t lowering their prices to match. Customers get the short end of the stick, mainly because there are no alternatives. Open Telemetry collects over 200 points every second. However, we don’t need all these data points. So rather than charge users for storage they don’t need, organizations should collect and store only the useful ones and delete the rest. This can reduce the cost of storing and processing data. Correlation to causation analysis Most legacy observability platforms give basic information about what’s happening in the cloud or application. However, many times the inciting event takes place hours or even days before. As such, it’s important to monitor CI/CD pipelines to see when code gets pushed, as well as which release or request starts to create the problem. Let’s say there’s one network socket that’s slow, and it starts to clog requests. As a result, your backend starts to slow, which then produces an error. Then the front end slows, producing another error. Then the application crashes. You may only notice the front end slowing down and think that caused the application crash. But in reality, the problem started elsewhere. In a distributed architecture, this root cause analysis takes more time than in a monolith. Observability platforms need to adapt to this new reality. AI-based alerts Alert fatigue is a real challenge. When developers receive so many alerts that they mute email threads or Slack channels, this hides issues and slows down time to resolution. Instead, AI-based alert systems leverage AI to predict which alerts are essential and which are not. AI can also provide context and even suggest possible solutions. Final thoughts This is an exciting time to be in observability. The changes we’re seeing are opening the door to untold opportunities. 
The question remains: Who will rise to the top in 2023? Laduram Vishnoi is founder and CEO at Middleware. "
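The cost-optimization argument in the article (store only the useful telemetry points, delete the rest) can be made concrete with a small sketch. The window size and tolerance below are illustrative assumptions, not a recommendation from any vendor.

```python
# Sketch of telemetry down-sampling: keep a periodic trend sample plus any
# point that deviates from the rolling baseline, and drop the rest.
from statistics import mean

def downsample(points: list, window: int = 10, tolerance: float = 0.2) -> list:
    """Return (index, value) pairs worth storing: every `window`-th point
    as a trend sample, plus any point off the rolling mean by > tolerance."""
    kept = []
    for i, value in enumerate(points):
        baseline = mean(points[max(0, i - window):i]) if i else value
        if i % window == 0 or abs(value - baseline) > tolerance * abs(baseline):
            kept.append((i, value))
    return kept

# 60 seconds of a flat CPU metric with one spike at t=42.
series = [0.30] * 60
series[42] = 0.95
kept = downsample(series)
print(f"stored {len(kept)} of {len(series)} points")  # the spike survives
```

The interesting event (the spike) is retained while the flat stretches collapse to a handful of trend samples, which is the trade-off behind cheaper telemetry storage.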
14051
2022
"Monte Carlo plans to scale data observability platform | VentureBeat"
"https://venturebeat.com/data-infrastructure/monte-carlo-attains-unicorn-status-plans-to-scale-data-observability-platforms"
"Monte Carlo plans to scale data observability platform Data, the lifeblood of all information technology that circulates among devices and carries business with it, is damaging to systems if loaded with impurities. With data stores filling up constantly and analytics apps dipping into them to find business value, San Francisco, California-based Monte Carlo intends to make certain that all data is clean, stored safely and ready to be used at any time, across any data store – cloud or on-premises. This requires a serious dose of data observability that the startup is already providing for several hundred enterprise clients. Monte Carlo’s machine learning-powered platform provides enterprise data analysts with a holistic view of data reliability for critical business and data product use cases in near real-time, the company’s chief engineer and cofounder, Lior Gavish, told VentureBeat. 
The 3-year-old company announced today that it has raised $135 million in a series D round from a group of investors led by IVP, giving it a valuation of $1.6 billion. Its frontline product is a SOC-2 Type II certified data observability platform that operates with an intuitive user interface. Data observability, a hot VC space: The IT data space has never been hotter in the venture capital world. In only the last year, BigQuery reached a $1.5 billion valuation; Snowflake hit $1.2 billion; Databricks came in at $800 million. Monte Carlo is merely the latest to follow this trend. The company claims to be the first data observability toolmaker to achieve a billion-dollar valuation, joining the ranks of Databricks, Fivetran, Starburst and dbt Labs as a data unicorn. Gavish told VentureBeat that the company intends to use the infusion of new capital to continue improving experiences for its hundreds of customers, scale the data observability category to new verticals and grow its U.S. and EMEA go-to-market and engineering teams. "Data is in a lot of places, right?" Gavish said. "Some of them are legacy. Some of them are in the modern data stack and some of them are up-and-coming, like streaming. Solving the (data) reliability problem cannot be done as a point solution. If you only control reliability in one part of the stack, you're going to inevitably fail because reliability issues happen everywhere in every part of the stack that gets to process data." Monte Carlo supports as much of the IT stack as possible, Gavish said. "I'm trying to create as much observability as possible across the stack. And so we're constantly working hand-in-hand with our customers to understand what are the data stores and what are the data processing mechanisms that they are adopting. 
"We make sure that we support it in our solution; we also support all the major data warehouses, all the major data lake technologies, all the major BI tools, all the major orchestration tools. And we're continuing to add and develop that based on listening to demand from our customers," Gavish said. Augmenting the future of data reliability: As companies ingest more data and pipelines become increasingly complex, teams need a way to ensure that the data powering their decision-making and digital products is reliable and actionable, Gavish said. Problems that result from data-quality issues reaching the production stream can be expensive to fix once they are past a certain point in the use case. Mirroring the rise of application performance monitoring (APM) tools such as Datadog and New Relic to keep software downtime at bay, data observability solves the problem of data downtime by giving teams end-to-end coverage and visibility into data health across their modern data stack. Money in cloud databases: In 2021, organizations spent $39.2 billion on cloud databases such as Snowflake, Databricks and Google BigQuery, yet Gartner estimates data downtime and poor data quality cost the average organization $12.9 million per year. Monte Carlo research shows a correlation between data incidents and the amount of data an organization handles, with the average business experiencing at least one data incident for every 15 tables in its environment, Gavish said. "As companies continue to invest in technologies that drive smarter decision-making and power digital services, the need for high-quality data has never been higher," Cack Wilhelm, General Partner at IVP, said in a media advisory. Since its series C announcement in August 2021, Monte Carlo more than doubled revenue each quarter and achieved 100 percent customer retention in 2021. 
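The data-downtime monitoring described here reduces, at its simplest, to rules evaluated over table metadata such as last load time and row volume. Below is a minimal sketch of that idea; the table names, thresholds and rule set are hypothetical illustrations for this article, not Monte Carlo's product or API:

```python
from datetime import datetime, timedelta

# Hypothetical table metadata: last load time and row count per table.
tables = {
    "orders":    {"last_loaded": datetime(2022, 5, 24, 9, 0), "row_count": 120_000},
    "customers": {"last_loaded": datetime(2022, 5, 22, 9, 0), "row_count": 0},
}

def freshness_incidents(tables, now, max_age=timedelta(hours=24)):
    """Flag tables that have not been loaded recently or that arrived empty."""
    incidents = []
    for name, meta in tables.items():
        if now - meta["last_loaded"] > max_age:
            incidents.append((name, "stale"))
        if meta["row_count"] == 0:
            incidents.append((name, "empty"))
    return incidents

print(freshness_incidents(tables, now=datetime(2022, 5, 24, 12, 0)))
```

A production platform layers ML-learned baselines on top of such rules, but the underlying signal is the same: metadata drift relative to an expected pattern.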
On its list of several hundred customer companies are JetBlue, Gusto, Affirm, CNN, MasterClass, Auth0 and SoFi; partners include Snowflake, Databricks and dbt Labs. “It’s simply not enough to have data – it needs to be discoverable, accessible and reliable,” Barr Moses, CEO and cofounder of Monte Carlo, said in a media advisory. “Monte Carlo created the world’s leading data observability platform to accelerate the adoption of reliable data while reducing time to detection and resolution for data downtime.” The company is backed by Accel, GGV Capital, Redpoint Ventures, ICONIQ Growth, Salesforce Ventures, GIC Singapore and IVP. Competitors in the data observability market include ICT Reverse, Tuosi Technology, Mathematica and Zertifika General. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,052
2,023
"Want to mitigate cyber risk? Start with zero trust visibility | VentureBeat"
"https://venturebeat.com/security/want-to-mitigate-cyber-risk-start-with-zero-trust-visibility"
"Sponsored: Want to mitigate cyber risk? Start with zero trust visibility. Presented by Zscaler. As a product of 1990s Australian culture, a distinct message has stuck with me: an advertisement from 1994 that stated simply, "you'll never never know, if you never never go." It refers to the vastness that is the Australian Outback, which is often called the Never Never—something so expansive and intimidating that one cannot comprehend it unless they explore it. And if they do, they tend to either change forever or never come back. This vastness applies to cybersecurity and IT in general: the exponential growth of information has led to an exponential growth of risk (of getting compromised, of having data stolen, etc.). Fully grasping the scope of risk becomes increasingly difficult, given the sheer number of risk vectors. And as enterprises accumulate more systems, services and functions, the amplification of information from system outputs like logs and traces eventually leads to paralysis. 
This can leave an organization unable to act on risk mitigation—waiting for an audit, regulation or (sadly) a breach merely to address the topic, much less solve it. All other things being equal, risk is generally proportional to complexity, meaning the “outback” of risk only gets worse the longer it is left unaddressed. (It’s no surprise that McKinsey projects the damage from cyberattacks will approach $10.5 trillion by 2025.) Understanding risk across the great IT expanse This idea of getting IT running and letting legacy risk lie, with the only real objective being to “keep the lights on,” has left large swaths of infrastructure and technology within the reach of bad actors. Merely keeping things running, without looking to new functions and opportunities, will provide a wealth of new avenues for cybercrime growth for a long time to come. Often, long-forgotten services offer attackers the sweetest, juiciest attack paths—if you don’t know that an application, set of applications or other service exists, you’d better believe someone will find it. It’s a fundamental truth in the zero trust world that if you have a service running that you don’t know about, you should treat it as already compromised. Navigating the Never Never with zero trust Zero trust isn’t just a buzzword: it provides fine-grained insight into access along all points in an enterprise. An effective zero trust deployment not only delivers controls, but also enables visibility into exactly what is being controlled. The zero trust environment provides a trail map for the outback, whether you “lift and shift” or implement a new architecture altogether. 
Zero trust states that nothing is allowed without first going through: (1) identification of the initiator, their context and where they are going; (2) application of controls, which can include risk scoring, malicious content analysis, data protection, inspection and more, but at a minimum includes an authorization decision based on business need; (3) enforcement of policy and connectivity via approved paths and conditions; and (4) an audit trail for real-time correlation, post-event forensics and accountability. From there, an enterprise would know: initiator details (user, workload, thing, etc.); destination details (application, location, service, function, etc.); the types of content being processed (intellectual property, malicious content, etc.); and path information and metrics, such as success or failure and quality of access. Employing zero trust for granular control provides your enterprise with incredibly specific insight into where to "go." Then, rather than cope with the aforementioned paralysis, you can leverage this knowledge in two distinct ways. The things you know: you can make some broad cuts into who or what has access. For instance, should the entire company have access to the financial system? Broad controls like this can quickly have a large impact. The things you don't know: here, leveraging the output of the zero trust service with an empowered business AI that acts based on your business requirements will help you make the most effective decisions for your enterprise. Enterprise benefits of zero trust in the IT outback: A zero trust approach brings numerous benefits. Reduced risk: risk reduction is the biggest impact of comprehensive, contextual visibility into services. Visibility allows IT to put proper controls in place for exposed services, determine whether they are needed and whether they have the correct entitlements, and provide identity-based controls. Financial optimizations: strategic finance, purchasing and negotiation are powerful tools for any enterprise. 
However, most enterprises have recurring bills for redundant and overlapping components, hardware, software and services. Teasing apart this environment will highlight the redundancy and waste, not only saving money through deduplication, but also informing a more conscious, lean-forward negotiation and purchasing strategy. Business optimizations: complexity is costly. Simplifying processes reduces operational costs, waste (in the classic sense), power consumption, troubleshooting time for support issues, and points of failure in IT. Naturally, venturing into the risky Never Never will also let you address risk directly, enabling efficient risk reduction measures, lower insurance costs and a reduced likelihood and impact of cyber incidents. Future-proofing and design: the world runs on data. Companies and services act based upon the information and analytics in front of them, so focusing on a data-first architecture—and removing extraneous legacy structures—will ensure future readiness and usability. Thus, understanding available data as well as how to present and optimize it, and then leveraging tools like large language models (LLMs) to drive informed decisions, will be the status quo. Critically, this requires not just information, but accurate information. We'll look at this in depth in an upcoming post. Competitive agility: optimizations and enhancements, driven by data and data analytics, rely on accurate data. So, accurate insight into your enterprise ecosystem will help you take informed steps to stay competitive. Control and insight of infrastructure mean faster adoption of new technologies, easier integration of third-party code and tools, faster iteration and deployment of first-party code, and more flexibility in IT and R&D. This frees up resources from technical and security debt and enables both security as a competitive advantage and investment in more of the in-market features and development that matter to end users and customers. 
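The zero trust flow outlined earlier (identify the initiator, apply controls, enforce a decision, keep an audit trail) can be sketched as a single decision function. This is an illustrative toy with invented rules, thresholds and field names, not Zscaler's implementation:

```python
audit_log = []  # the audit trail: every decision is recorded for forensics

def authorize(initiator, destination, risk_score, content_flags):
    """Toy zero trust decision: identify, apply controls, enforce, audit."""
    # Step 1: identification. No anonymous access, ever.
    if initiator.get("identity") is None:
        decision = "deny"
    # Step 2: controls, e.g. risk scoring and malicious-content analysis.
    elif risk_score > 70 or "malware" in content_flags:
        decision = "deny"
    # Authorization based on business need: only entitled identities pass.
    elif destination not in initiator.get("entitlements", []):
        decision = "deny"
    else:
        # Step 3: enforcement, i.e. connect via an approved path.
        decision = "allow"
    # Step 4: audit trail for correlation and accountability.
    audit_log.append({"who": initiator.get("identity"),
                      "to": destination, "decision": decision})
    return decision

alice = {"identity": "alice", "entitlements": ["finance-app"]}
print(authorize(alice, "finance-app", risk_score=10, content_flags=[]))  # allow
print(authorize(alice, "hr-app", risk_score=10, content_flags=[]))       # deny: no entitlement
```

The point of the sketch is the ordering: identity and context are established before any control is applied, and every outcome, allowed or denied, lands in the audit trail that gives the visibility this article is about.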
Environmental, social and governance (ESG) responsibility: done correctly, IT efficiencies aren't just niceties that reduce financial costs; they also directly correlate to a lighter carbon footprint, improving the literal environment we all call home and the living conditions of people all over the world. Identifying services and solutions that are no longer needed, or introducing optimization, is a core responsibility in direct pursuit of ESG goals. You'll never never know, if you never never go: Insight, content, visibility and—ultimately—knowledge set enterprises up for success in a modern world. Adding to, encouraging and even allowing the cyber risk Never Never is no longer acceptable. Whilst "going and knowing" may cause some palpitations, it is the responsibility of every organization to get out of its comfort zone and become informed, and ultimately decisive. To see how Zscaler is helping its customers reduce business risk, improve user productivity and reduce cost and complexity, visit zscaler.com/platform/zero-trust-exchange. Nathan Howe is VP, Emerging Technology + 5G at Zscaler. Sanjit Ganguli is Field CTO and VP, Transformation Strategy at Zscaler. Sam Curry is VP, CISO at Zscaler. "
14,053
2,023
"Dynatrace taps hypermodal AI for faster application observability | VentureBeat"
"https://venturebeat.com/ai/dynatrace-taps-hypermodal-ai-for-faster-application-observability"
"Dynatrace taps hypermodal AI for faster application observability. Massachusetts-headquartered Dynatrace, which provides an observability layer to monitor and optimize application development, performance and security, today announced the expansion of its Davis AI engine with new generative AI capabilities. The move, Dynatrace says, transforms Davis into the industry's first hypermodal AI, converging its unique approach to causal AI and predictive intelligence capabilities with generative AI. Davis will now provide enterprises with personalized recommendations based on their unique cloud data, speeding up some work and enabling people to focus on higher-value activities for faster application innovation. "With the release of the expanded Davis AI, we address … [and] redefine how observability and security solutions work. 
We expect Davis will enable our customers to achieve 10-20% productivity improvements year-over-year as they drive transformation initiatives related to observability and security," Bernd Greifeneder, CTO at Dynatrace, said in a statement. Dynatrace Davis AI for observability: Modern enterprise applications, and the environments they run in, are becoming increasingly complex. Thousands of services, millions of lines of code and trillions of dependencies are active at the same time. With Davis AI sitting at the heart of the Dynatrace platform, enterprises can monitor and keep these complex ecosystems up and running. Since its early days, Davis has been using causal AI to analyze observability and security data within the Dynatrace Grail data lakehouse and dependencies mapped by Dynatrace Smartscape topology, to deliver the context-rich anomaly detection and automation necessary for issue prevention, remediation and root-cause analysis. The engine also uses predictive intelligence to anticipate future behavior based on past data and observed patterns, allowing teams to address potential application issues that may surface down the line. The new Davis Copilot: With the latest move, Dynatrace is adding Davis Copilot to the AI party. The offering works in conjunction with Davis AI's existing causal and predictive AI capabilities to automatically provide users with recommendations on how to solve specific issues in the context of their environment and situation. But Davis Copilot does not use external data to work. Instead, the generative AI recommendations are fueled by precise anomaly context from causal and predictive AI that reflects unique attributes of each organization's hybrid and multi-cloud ecosystem. 
This approach ensures highly relevant results and helps boost productivity across business, development, security and operations teams. As Greifeneder noted, generative AI is a transformative technology in itself, but combining it with additional AI techniques like causal and predictive AI in this way paves the way to make the most of it. "This is because only causal AI can deterministically know the root cause of an issue; only predictive AI can see into the future reliably; and only generative AI can tailor recommendations and solutions to specific problems using advanced probabilistic algorithms," he said. Once the recommendations are generated, Davis Copilot also creates suggested automation workflows and dashboards to help users quickly address the issue at hand. In addition, users get the option to use the Copilot directly in natural language to explore, solve or complete specific tasks. Currently, Davis AI remains part of the main Dynatrace platform, but the new Copilot portion of the engine is not yet available. The company says it will make the expanded offering available sometime later this year. "
14,054
2,023
"Why agility and portability have become key drivers of multicloud investments | VentureBeat"
"https://venturebeat.com/data-infrastructure/why-agility-and-portability-have-become-key-drivers-of-multicloud-investments"
"Why agility and portability have become key drivers of multicloud investments. One of the top areas of interest at Sierra Ventures' 17th CXO Summit was the multicloud strategy being adopted by enterprises. While a significant majority of the more than 40 executives polled are using multiple cloud service providers, they were all very interested in the agility and portability of their cloud environments. As enterprises grapple with the challenges of moving to or thriving in the cloud, they are hyper-focused on these two characteristics. Multicloud: Minimizing risk, optimizing performance. In the context of multicloud, agility and portability together enable companies to minimize business risk with high availability; lower costs through vendor leverage (avoiding lock-in); and optimize performance by choosing each provider's best features. 
Data portability is also a hot-button topic because of requirements like the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), both requiring seamless data transfers between platforms. In this environment, agility and portability play important roles in investment opportunities, even in light of the heavy commitments to their own platforms that cloud service providers have made. Companies like HashiCorp have built businesses that make it easier for organizations to provision, secure, connect and run their infrastructure and applications in any environment and across multiple environments through Infrastructure as Code (IaC). Given the massive DevOps movement, containers and the need for data portability have created tremendous opportunities for startups. A container, by design, is more portable and agile than virtual machines (VMs), allowing it to play an important role in a multicloud strategy. Enterprises are trying to containerize as much as possible, and most new application projects are container-based. But they will have to coexist with legacy VM-based workloads for a long time. Enter Kubernetes. Kubernetes has emerged as the container orchestration engine of choice, and there is a growing demand for solutions that move beyond the legacy open-source Rancher and Cluster-API-based VMware Tanzu, and toward Spectro Cloud and other new enterprise Kubernetes management solutions. Through Kubernetes operators and integration with Terraform and Crossplane, even external resources such as VMs can be managed using Kubernetes. Data portability is needed because it prevents data from being trapped in a single platform or application. At the consumer level, it means the right to migrate data from one platform to another and it prevents individuals from being tied to one provider. 
But at the enterprise level, large-volume data stores can hinder portability. This can be solved by data replication across clouds and regions, or by using modern cloud-native databases such as CockroachDB, Yugabyte, TiDB and MongoDB, which can easily handle cross-region data replication and queries. This space will continue to generate innovation, given the massive opportunities and dollars available. Multicloud: On the edge. One emerging area of high interest is the edge, which is becoming an extension of multicloud. Edge locations are quickly becoming the next phase of multicloud because more and more data is generated at the edge, and organizations need a low-latency solution to compute and process all this data. Running Kubernetes at the edge (such as for local AI/ML data processing) is becoming a more important part of the enterprise multicloud and digital transformation strategy. For example, GE Healthcare is processing large volumes of medical imaging data at the edge for performance and regulatory reasons. However, edge locations may not have skilled operations personnel or a cloud IaaS endpoint, and the premises are much less secure than a cloud provider's data center. This makes edge management very challenging, but also a big opportunity. With 5G and other developments, the edge is rapidly becoming a new battleground in the cloud wars. A unified operation: When adopting a multicloud strategy, maintaining consistency across the spectrum is key to reducing management complexities. Whether it is RBAC, IAM, infrastructure management, workload management, or network and security policy, a tool that offers unified operation across the multicloud should be a priority. Of course, each cloud still has its nuances, but a declarative approach, with design-once and deploy/manage-anywhere capability, can solve the problem of multicloud management at scale. Tools such as Terraform, Pulumi, Crossplane and Cluster API (CAPI) all follow these design principles. 
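The declarative, design-once/deploy-anywhere model that Terraform, Pulumi, Crossplane and CAPI share boils down to one loop: compare a declared desired state against each cloud's actual state and emit the actions that close the gap. A toy reconciler illustrates the pattern; the resource names and states here are invented for the example, not any tool's real schema:

```python
def reconcile(desired, actual):
    """Return the plan that moves `actual` toward `desired`: create/update/delete."""
    plan = []
    for name, spec in desired.items():
        if name not in actual:
            plan.append(("create", name, spec))       # declared but missing
        elif actual[name] != spec:
            plan.append(("update", name, spec))       # present but drifted
    for name in actual:
        if name not in desired:
            plan.append(("delete", name, None))       # stray, not declared
    return plan

# One desired state, applied unchanged against two different clouds.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
cloud_a = {"web": {"replicas": 2}}                             # drifted replica count
cloud_b = {"web": {"replicas": 3}, "cache": {"replicas": 1}}   # stray resource

print(reconcile(desired, cloud_a))  # update web, create db
print(reconcile(desired, cloud_b))  # create db, delete cache
```

Because the desired state is cloud-agnostic, the same declaration can be reconciled against any provider; only the driver that applies the plan differs, which is exactly the portability property the article describes.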
Agility and portability are driving multicloud investments in large enterprises, as a consistent operating model becomes increasingly important across a heterogeneous fleet of cloud platforms. To that end, CXOs need to focus on three key elements: next-generation Kubernetes management tools to navigate their multicloud environments; platforms to help with large-scale data portability; and finally, solutions to solve for the growing compute requirements at the edge. We believe that we are in the early innings of a very large opportunity created by this major disruption. Mark Fernandes is managing partner at Sierra Ventures. "
14,055
2,021
"Latest big data developments in the realm of data lakehouse | VentureBeat"
"https://venturebeat.com/business/latest-big-data-developments-in-the-realm-of-data-lakehouse"
"Latest big data developments in the realm of data lakehouse. I recently wrote a post about the concept of the data lakehouse, which in some ways brings together components of what I outlined in my rant about databases and what I wanted to see in a new database system. In this post, I am going to attempt a roll-up of some recent big data developments that you should be aware of. Let's start with the lowest layer in the database or big data stack, which in many cases is Apache Spark as the processing engine powering a lot of the big data components. The component itself is obviously not new, but there is an interesting feature that was added in Spark 3.0: Adaptive Query Execution (AQE). This feature allows Spark to optimize and adjust query plans based on runtime statistics collected while the query is running. 
Make sure to turn it on for Spark SQL (spark.sql.adaptive.enabled), as it's off by default. The next component of interest is Apache Kudu. You are probably familiar with Parquet. Unfortunately, Parquet has some significant drawbacks, like its innate batch approach (you have to commit written data before it's available for read), specifically when it comes to real-time applications. Kudu's on-disk data format closely resembles Parquet, with a few differences to support efficient random access as well as updates. Also notable is that Kudu can't use cloud object storage, due to its use of Ext4 or XFS and its reliance on a consensus algorithm (Raft), which isn't supported in cloud object storage. At the same layer in the stack as Kudu and Parquet, we have to mention Apache Hudi. Apache Hudi, like Kudu, brings stream processing to big data by providing fresh data. Like Kudu, it allows for updates and deletes. Unlike Kudu, though, Hudi doesn't provide a storage layer, and therefore you generally want to use Parquet as its storage format. That's probably one of the main differences: Kudu tries to be a storage layer for OLTP, whereas Hudi is strictly OLAP. Another powerful feature of Hudi is that it makes a 'change stream' available, which allows for incremental pulling. With that, it supports three types of queries. Snapshot queries: queries see the latest snapshot of the table as of a given commit or compaction action. Here the concepts of 'copy on write' and 'merge on read' become important, the latter being useful for near real-time querying. Incremental queries: queries only see new data written to the table since a given commit/compaction. Read-optimized queries: queries see the latest snapshot of the table as of a given commit/compaction action. This is mostly used for high-speed querying. 
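The difference between snapshot and incremental queries can be mimicked over a toy commit timeline: a snapshot query resolves the latest version of each key as of a commit, while an incremental query returns only records written after a given commit. This is a simplified model of the semantics, not Hudi's actual API:

```python
# Toy commit timeline: (commit_time, key, value) records; updates are allowed.
timeline = [
    (1, "a", "v1"),
    (1, "b", "v1"),
    (2, "a", "v2"),   # update of key "a" in commit 2
    (3, "c", "v1"),
]

def snapshot_query(timeline, as_of):
    """Latest value per key as of a given commit (what a snapshot query sees)."""
    state = {}
    for commit, key, value in timeline:
        if commit <= as_of:
            state[key] = value   # later commits overwrite earlier versions
    return state

def incremental_query(timeline, since):
    """Only records written after `since`: the basis of incremental pulling."""
    return [(c, k, v) for c, k, v in timeline if c > since]

print(snapshot_query(timeline, as_of=3))     # {'a': 'v2', 'b': 'v1', 'c': 'v1'}
print(incremental_query(timeline, since=1))  # records from commits 2 and 3 only
```

The incremental form is what makes the 'change stream' valuable: a downstream consumer can checkpoint the last commit it saw and pull only the delta on the next run, instead of rescanning the whole table.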
The Hudi documentation is a great spot to get more details. And here is a diagram I borrowed from XenoStack: What, then, are Apache Iceberg and Delta Lake? These two projects offer yet another way of organizing your data. They can be backed by Parquet, and each differs slightly in its exact use cases and in how it handles data changes. And just like Hudi, both can be used with Spark and Presto or Hive. For a more detailed discussion of the differences, have a look here, and this blog walks you through an example of using Hudi and Delta Lake. Enough about tables and storage formats. While they are important when you have to deal with large amounts of data, I am much more interested in the query layer. The project to look at here is Apache Calcite, which is a 'data management framework', or what I'd call a SQL engine. It's not a full database, mainly because it omits the storage layer, but it supports multiple storage engines. Another cool feature is its support for streaming and graph SQL. Generally you don't have to bother with the project directly, as it's built into a number of existing engines like Hive, Drill and Solr. As a quick summary, and a slightly different way of looking at why all the projects mentioned so far have come into existence, it might make sense to roll up the data pipeline challenge from a different perspective. Remember the days when we deployed Lambda architectures? You had two separate data paths: one for real-time and one for batch ingest. Apache Flink can help unify these two paths. Others, instead of rewriting their pipelines, let developers write the batch layer, used Calcite to automatically translate that into the real-time processing code, and used Apache Pinot to merge the real-time and batch outputs. (Source: LinkedIn Engineering) The nice thing is that there is a Presto-to-Pinot connector, allowing you to stay in your favorite query engine. Sidenote: don't worry about Apache Samza too much here.
It's another distributed processing engine like Flink or Spark. Enough of the geekery. I am sure your head hurts just as much as mine, trying to keep track of all of these crazy projects and how they hang together. Maybe another interesting lens would be to check out what AWS has to offer around databases. To start with, there is PartiQL. In short, it's a SQL-compatible query language that enables querying data regardless of where or in what format it is stored: structured, unstructured, columnar, row-based, you name it. You can use PartiQL within DynamoDB or the project's REPL. Glue Elastic Views also supports PartiQL at this point. Well, I get it: a general-purpose data store that just does the right thing, meaning it's fast, it has the correct data integrity properties, etc., is a hard problem. Hence the sprawl of all of these data stores (search, graph, columnar, row) and processing and storage projects (from Hudi to Parquet and Impala back to Presto and CSV files). But eventually, what I really want is a database that just does all these things for me. I don't want to learn about all these projects and nuances. Just give me a system that lets me dump data into it and answers my SQL queries (real-time and batch) quickly… VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14056
2023
"An open data lakehouse will maintain and grow the value of your data | VentureBeat"
"https://venturebeat.com/data-infrastructure/an-open-data-lakehouse-will-maintain-and-grow-the-value-of-your-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest An open data lakehouse will maintain and grow the value of your data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Are recession fears eating at you? Worried about all your digital transformation investments evaporating like so much dew in the morning sun? That’s a natural way to feel. After all, the digital transformation journey is fraught with obstacles. And the challenging task of extracting value from growing repositories of data sometimes gets put on the back burner. Fortunately, you don’t need to be stuck in fear and worry; these suggestions can help prevent your company’s precious data from going to waste. 
Step 1: Get your data out of the cost center

Even though "everyone" says that data is the big shiny key that will unlock productivity and competitiveness and all the trappings of business success, in practice (that is, in action, not just words) data and data analytics are relegated to the "cost of doing business" side of the ledger. This categorization triggers a race to the bottom, as organizations try to find the cheapest ways to wring value from their data. In most cases, it means outsourcing this business-critical function to lower and lower bidders. Resist this trend. Start treating data, and the systems and people that work with it, as the business assets they are. How? Try exposing sterilized or carefully curated versions of your data to customers and clients, as dashboards, for instance. Make your data useful to them, and they will pay you for access. Utilizing the low-cost, high-availability object stores and robust built-in security frameworks that the cloud vendors provide makes this a much simpler and more cost-effective undertaking than ever before. When you're no longer merely spending money to generate, store, move and analyze data, you can put your data to work. You'll probably find it's really good at earning its keep.

Step 2: Keep your data options (and your infrastructure) open

I know this one might sound scary. Too often, people think open (as in open source) means unprotected, unmanageable or just too much effort. I'd argue that with the speed of technological advancements hammering us from all directions, the advantages of openness are hard to argue against. They include: No vendor lock-in, which can save you beaucoup money over time. Flexibility to adopt (and, just as importantly, jettison) technologies or solution pieces according to what you need and when you need them.
Futureproofing, because unless you've found a perfect crystal ball somewhere (and if so, what are you doing reading this article?), there's no way to predict what will happen next year or next decade or even next week. Communities with open governance in which you and your company can participate and actually help shape the future. And yes, these benefits of openness apply in full measure to data and databases. An open data format coupled with an open-source query engine delivers the reliability and performance of a data warehouse; the flexibility and better price/performance of a data lake; the freedom of non-proprietary SQL query processing and data storage; and the governance, discovery, quality and security you need. Unlike in the early database days of the 1970s, when companies could choose among a handful of SQL-based relational database management systems, you are not tied to a single vendor. By uncoupling storage and compute, data lakes let you piece together a solution that takes best advantage of the amount and types of data you actually use. In addition to SQL processing, you can do machine learning (ML) and AI, if that's your thing. A data lake is flexible, elastically scalable and cost-effective, meaning that now is pretty much a golden era of data analytics. But (and you knew there was going to be a "but") the flexibility of data lakes can make them disorganized and hard to manage. Plus, the lack of data consistency in data lakes makes it hard to enforce reliability and security. Here's the analogy: a data warehouse is a group of sled dogs tied together and moving along snowy terrain in the same direction, while a data lake is more like a menagerie of various breeds of dogs running around in different directions. And sure, these latest databases can scale like crazy, yet they still don't solve all the cost issues, because they link data storage with compute. So as your data grows, so do your processing and/or cloud infrastructure costs.
And the complexity of managing these systems? Forget about it if you don't have an army of IT admins and acres of data centers brimming with millions of twinkling lights.

Step 3: Employ a data lakehouse

So here's how to take advantage of all the data flowing through your organization's digital transformation pipelines and bring together open-source systems and the cloud to maximize the utility of the data: use an open data lakehouse designed to meld the best of data warehouses with the best of data lakes. That means storage for any data type, suitable for both data analytics and ML workloads; cost-effective, fast and flexible; and with a governance or management layer that provides the reliability, consistency and security needed for enterprise operations. Keeping it "open" (using open-source technologies and standards like PrestoDB, Parquet and Apache Hudi) not only saves money on license costs, but also gives your organization the reassurance that the technology backing these critical systems is being continuously developed by companies that use it in production and at scale. And as technology advances, so will your infrastructure. Remember, you've already invested mightily in data transformation initiatives to remain competitively nimble and power your long-term success. By shifting your relationship to data from a cost center to a profit center, and by employing an open data lakehouse in your operations, you will increase the chances of your data ecosystem paying dividends. Rachel Pedreschi is head of technical services at Decodable.
"
14057
2023
"Databricks reinforces commitment to truly open data lakehouses with Delta Lake 3.0 | VentureBeat"
"https://venturebeat.com/data-infrastructure/databricks-reinforces-commitment-to-open-data-lakehouses-with-delta-lake-3-0"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks reinforces commitment to truly open data lakehouses with Delta Lake 3.0 Share on Facebook Share on X Share on LinkedIn Ali Ghodsi, the CEO and cofounder of Databricks. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As enterprises continue to double down on data lakehouses , data and AI company Databricks is shifting gears with Delta Lake, the open-source framework serving as the foundation to store data and tables in its own lakehouse offering. Today, at its annual conference, the lakehouse vendor announced the launch of Delta Lake 3.0, which features automatic support for competing Apache Iceberg and Hudi table formats. The move, the company says, will allow enterprise users to eliminate complicated integration work and focus on building truly open data lakehouses. “Customers shouldn’t be limited by their choice of (table) format,” said Databricks cofounder and CEO Ali Ghodsi. 
“With this latest version of Delta Lake, we're making it possible for users to easily work with whatever file formats they want, including Iceberg and Hudi, while still accessing Delta Lake's industry-leading speed and scalability.” Delta Lake 3.0 also includes Delta Kernel, an initiative that makes it easier to develop and maintain Delta connectors, and Liquid Clustering for cost-effective data clustering even as datasets grow.

Unification play from Databricks

After the initial rise of first-generation Apache Hive, three open table formats have largely dominated the data ecosystem: Delta Lake, Apache Iceberg and Apache Hudi. While each of these formats has its own core strengths, with support for common file formats like Parquet to efficiently handle analytic workloads, data platform vendors have been focusing on one primary table format (like competitor Snowflake's support for Iceberg) while providing connector support for the others. This meant users had to choose one of the three and engage in complicated integration work. Now, with the release of Delta Lake 3.0, there's no need to compromise anymore, according to Databricks. The company is adding Universal Format (UniForm), which offers automatic support for Iceberg and Hudi within Delta, enabling greater interoperability across ecosystems and making it possible for data originating elsewhere to be pulled into Delta Lake. Databricks' support of the three formats keeps it firmly in the lead in the push toward openness and simplicity. Microsoft recently pushed forward with a commitment to Delta Lake with its new Microsoft Fabric offering. (Editor's note: Come learn more about data and generative AI in the enterprise at VB Transform on July 11 & 12 in San Francisco, our networking event for enterprise technology decision-makers focused on the explosive technology.)
When using UniForm, data stored in Delta Lake can be read as if it were Iceberg or Hudi. The capability automatically generates the metadata needed for Iceberg or Hudi and unifies the table formats, saving users from the hassle of choosing between formats or doing manual conversions. "With Delta Lake 3.0, Databricks is providing unification of metadata between these formats, while expanding access to a much broader ecosystem of connectors and query tools," Adam Ronthal, VP analyst for data management and analytics at Gartner, told VentureBeat. "The biggest impact here will be in the ability to share metadata between these formats as part of a broader data ecosystem."

What's more in Delta Lake 3.0?

In addition to the Universal Format, Delta Lake 3.0 includes Delta Kernel and Delta Liquid Clustering. Delta Kernel is designed to tackle the hassle of reworking Delta connectors with each new version or protocol change. With just one stable API, the offering will ensure that connectors are built against a core Delta library that implements the latest specifications. Meanwhile, Liquid Clustering introduces a flexible data layout technique that will provide cost-efficient data clustering as data grows, helping companies meet their read-and-write performance requirements. "Delta Lake 3.0, including Universal Format and Kernel, underlines the open-source community's dedication to enhancing data reliability and delivering advanced analytics," said Mike Dolan, SVP of projects at The Linux Foundation. "This release is a step forward in creating a community-driven ecosystem of data integrity, seamless collaboration and real-time analytics tools." According to statistics from Databricks, Delta Lake garners more than a billion downloads per year, as well as regular feature updates from contributing engineers across businesses like AWS, Adobe, eBay, Twilio and Uber. Databricks' Data and AI Summit runs through June 29 in San Francisco.
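As a concrete sketch of what "automatic Iceberg metadata for a Delta table" looks like from a user's point of view: UniForm is switched on per table via a table property. The property name below is taken from Delta Lake 3.0/UniForm announcement material and should be treated as an assumption rather than verified syntax; the DDL is held in a plain Python string, with a toy helper that reads the declared formats back out.

```python
# Hedged sketch: enabling UniForm on a Delta table via a table property.
# The property name 'delta.universalFormat.enabledFormats' is an assumption
# based on Delta Lake 3.0 announcement material; check the Delta docs for
# the authoritative spelling before using it on a real cluster.
create_uniform_table = """
CREATE TABLE sales (id BIGINT, amount DOUBLE)
USING DELTA
TBLPROPERTIES ('delta.universalFormat.enabledFormats' = 'iceberg')
""".strip()

def enabled_formats(ddl):
    """Pull the declared UniForm formats back out of the DDL (toy parser)."""
    marker = "enabledFormats' = '"
    start = ddl.index(marker) + len(marker)
    return ddl[start:ddl.index("'", start)].split(",")
```

The point of the design is that only this one property changes; readers that speak Iceberg then consume the generated metadata without any copy of the Parquet data files.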
"
14058
2022
"Why data lakehouses are the key to growth and agility | VentureBeat"
"https://venturebeat.com/data-infrastructure/more-organizations-see-data-lakehouses-as-the-key-to-growth-and-agility"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why data lakehouses are the key to growth and agility Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As organizations ramp up their efforts to be truly data-driven, a growing number are investing in new data lakehouse architecture. As the name implies, a data lakehouse combines the structure and accessibility of a data warehouse with the massive storage of a data lake. The goal of this merged data strategy is to give every employee the ability to access and employ data and artificial intelligence to make better business decisions. Many organizations clearly see lakehouse architecture as the key to upgrading their data stacks in a manner that provides greater data flexibility and agility. Indeed, a recent survey by Databricks, a cloud data platform provider, found that nearly two-thirds (66%) of survey respondents are using a data lakehouse. And 84% of those who aren’t using one currently, are looking to do so. 
"More businesses are implementing data lakehouses because they combine the best features of both warehouses and data lakes, giving data teams more agility and easier access to the most timely and relevant data," says Hiral Jasani, senior product marketing manager at Databricks. There are four primary reasons why organizations adopt data lakehouse models, Jasani says: improving data quality (cited by 50%), increasing productivity (cited by 37%), enabling better collaboration (cited by 36%) and eliminating data silos (cited by 33%).

Impacts of a data lakehouse architecture on data quality and integration

Building a modern data stack on lakehouse architecture addresses data quality and data integration issues. It leverages open-source technologies, employs data governance tools and includes self-service tools to support business intelligence (BI), streaming, artificial intelligence (AI) and machine learning (ML) initiatives, Jasani explains. For example, because data lakes store a large volume of raw data in different formats, they are particularly difficult to secure and govern. To address this complexity, Delta Lake sits on top of the data lake to improve performance and help ensure data consistency and reliability. "Delta Lake, an open, reliable, performing and secure data storage and management layer for the data lake, is the foundation and enabler of a cost-effective, highly scalable lakehouse architecture," Jasani says. Delta Lake supports both streaming and batch operations, Jasani notes. It eliminates data silos by providing a single home for structured, semi-structured and unstructured data. This should make analytics simple and accessible across the organization. It allows data teams to incrementally improve the quality of the data in their lakehouse until it is ready for downstream consumption.
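The "incrementally improve quality until it is ready for downstream consumption" pattern is often described with bronze/silver/gold stage names. Those names are common lakehouse convention, not from this article, and the sketch below is a stdlib-only toy of the idea, not Delta Lake code: raw records land as-is, a cleaning step drops and types them, and an aggregation step produces the consumption-ready table.

```python
# Toy bronze -> silver -> gold pipeline: each stage raises data quality.
# Stage names follow the common "medallion" convention (illustrative only).
bronze = [  # raw ingested events, messy
    {"user": "u1", "amount": "10.5"},
    {"user": "u1", "amount": "4.5"},
    {"user": None, "amount": "oops"},  # bad record, will be dropped
    {"user": "u2", "amount": "7.0"},
]

def to_silver(rows):
    """Cleaned, typed records; malformed rows dropped."""
    out = []
    for r in rows:
        if r["user"] is None:
            continue
        try:
            out.append({"user": r["user"], "amount": float(r["amount"])})
        except ValueError:
            continue
    return out

def to_gold(rows):
    """Aggregated, consumption-ready table: total spend per user."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals
```

Running `to_gold(to_silver(bronze))` yields `{"u1": 15.0, "u2": 7.0}`; in a real lakehouse each stage would be a governed table rather than an in-memory list, so every team reads from the same refined copy.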
“Cloud also plays a large role in data stack modernization,” Jasani continues. “The majority of respondents (71%) reported that they have already adopted cloud across at least half their data infrastructure. And 36% of respondents cited support across multiple clouds as a top critical capability of a modern data technology stack.”

How siloed and legacy systems hold back advanced analytics

The many SaaS platforms that organizations rely on today generate large volumes of insightful data. This can provide a huge competitive advantage when managed properly, Jasani says. However, many organizations use siloed, legacy architectures, which can prevent them from optimizing their data. “When business intelligence (BI), streaming data, artificial intelligence and machine learning are managed in separate data stacks, this adds further complexity and problems with data quality, scaling and integration,” Jasani says. Legacy tools cannot scale to manage the increasing amount of data, and as a result, teams spend a significant amount of time preparing data for analysis rather than actually gleaning insights from it. On average, the survey found, respondents spent 41% of their total time on data analytics projects on data integration and preparation. In addition, learning how to differentiate and integrate data science and machine learning capabilities into the IT stack can be challenging, Jasani says. The traditional approach of standing up a separate stack just for AI workloads doesn’t work anymore due to the increased complexity of managing data replication between different platforms, he explains.

Poor data quality affects nearly all organizations

Poor data quality and data integration issues can result in serious negative impacts on a business, Jasani says. “Almost all survey respondents (96%) reported negative business effects as a result of data integration challenges.
These include lessened productivity due to the increased manual work, incomplete data for decision-making, cost or budget issues, trapped and inaccessible data, a lack of a consistent security or governance model, and a poor customer experience.” Moreover, there are even greater long-term risks of business damage, including disengaged customers, missed opportunities, brand value erosion and, ultimately, bad business decisions, Jasani says. Relatedly, data teams are looking to implement a modern data stack to improve collaboration (cited by 46%). The goal is a free flow of information, enabling data literacy and trust across an organization. “When teams can collaborate with data, they can share metrics and objectives to have an impact in their departments. The use of open-source technologies also fosters collaboration, as it allows data professionals to leverage the skills they already know and use tools they love,” Jasani says. “Based on what we’re seeing in the market and hearing from customers, trust and transparency are cultural challenges facing almost every organization when it comes to managing and using data effectively,” Jasani continues. “When there are multiple copies of data living in different places across the organization, it’s difficult for employees to know what data is the latest or most accurate, resulting in a lack of trust in the information.” If teams can’t trust or rely on the data presented to them, they can’t pull meaningful insights that they feel confident in, Jasani says. Data that is siloed across different business functions creates an environment where different business groups are utilizing separate data sets, when they all should be working from a single source of truth.

Data lakehouse models and advanced analytics tools

Organizations considering lakehouse technology are typically those that want to implement more advanced data analytics tools.
These organizations are likely handling many different formats of raw data on inexpensive storage, which makes lakehouse technology more cost-effective for ML/AI uses, Jasani explains. “A data lakehouse that is built on open standards provides the best of data warehouses and data lakes. It supports diverse data types and data workloads for analytics and artificial intelligence. And a common data repository allows for greater visibility and control of the data environment, so organizations can better compete in a digital-first world. These AI-driven investments can account for a significant increase in revenue and better customer and employee experiences,” Jasani says. To achieve these capabilities and address data integration and data quality challenges, survey respondents reported that they plan to modernize their data stacks in several ways. These include implementing data quality tools (cited by 59%), open-source technologies (cited by 38%), data governance tools (cited by 38%) and self-service tools (cited by 38%). One of the important first steps to modernizing a data stack is to build or invest in infrastructure that ensures data teams can access data from a single system. That way, everyone will be working off the same up-to-date information. “To prevent data silos, a data lakehouse can be utilized as a single home for structured, semi-structured and unstructured data, providing a foundation for a cost-effective and scalable modern data stack,” Jasani notes. “Enterprises can run AI/ML and BI/analytics workloads directly on their data lakehouse, which will also work with existing storage, data and catalogs, so organizations can build on current resources while having a future-proofed governance model.” There are also several considerations that IT leaders should factor into their strategy for modernizing their data stack, Jasani says.
These include whether they want a managed or self-managed service, product reliability to minimize downtime, high-quality connectors to ensure easy access to data and tables, timely customer service and support, and product performance capabilities to handle large volumes of data. Additionally, leaders should consider the importance of open, extendable platforms that offer streamlined integrations with their data tools of choice and enable them to connect to data wherever it lives, Jasani recommends. Finally, Jasani says, “there is a need for a flexible and high-performance system that supports diverse data applications including SQL analytics, real-time streaming, data science and machine learning. One of the most common missteps is to use multiple systems — a data lake, separate data warehouse(s), and other specialized systems for streaming, image analysis, etc. Having multiple systems adds complexity and prevents data teams from accessing the right data for their use cases.” "
14059
2023
"Oracle founder Larry Ellison confirms new gen AI service with Cohere during earnings call | VentureBeat"
"https://venturebeat.com/data-infrastructure/oracle-founder-larry-ellison-confirms-new-gen-ai-service-with-cohere-during-earnings-call"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Oracle founder Larry Ellison confirms new gen AI service with Cohere during earnings call Share on Facebook Share on X Share on LinkedIn Image Credit: Screenshot / Oracle OpenWorld Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Oracle Corp., the software giant known for its database technology, is joining the chorus of enterprise cloud vendors betting big on generative AI services. On Monday, the company revealed that it was developing a new cloud service with Cohere , a Toronto-based startup that specializes in building and training large language models (LLMs). Oracle’s founder and chief technology officer, Larry Ellison, confirmed the partnership during the company’s fourth-quarter earnings call, where he also reported strong growth in Oracle’s cloud business. Ellison said that Oracle and Cohere were working together to make it easy for enterprise customers to train their own customized LLMs using their private data, while protecting their data privacy and security. 
The news has been rumored for quite some time, given the close relationship between the two companies. Just last week, Oracle was part of Cohere’s $270 million series C funding round that valued the startup at around $2.2 billion. Other investors in the round included Nvidia Corp., Salesforce Ventures, Deutsche Telekom AG and SentinelOne Inc. “Cohere and Oracle are working together to make it very, very easy for enterprise customers to train their own specialized large language models while protecting the privacy of their training data,” Ellison said on the company’s earnings call. “Over the next few years, lots of companies are going to train their own specialized large language models.” Ellison revealed that Oracle’s own internal application development teams are already using the new Cohere AI cloud service running on Oracle Cloud Infrastructure (OCI). Oracle used its own private data to fine-tune and extend the existing Cohere LLMs, and that supplementary training has so far produced two new specialized LLMs: one for medical professionals and one for first responders. While Oracle is best known for its database technology, it is also a large player in the healthcare space following its 2022 acquisition of healthcare giant Cerner. “Specialized large language models will be instrumental in helping highly trained professionals use their precious time more efficiently,” Ellison said.

Oracle is no stranger to the world of AI

While the upcoming service with Cohere is new, Oracle is quite familiar with the world of AI. 
In fact, Ellison made sure to emphasize during the earnings call that Cohere is using Oracle Cloud to train its LLMs. Ellison said that Oracle has an edge over its competitors because it has more experience and expertise in handling large amounts of data securely and efficiently. Other vendors that have publicly revealed they use Oracle Cloud for training LLMs include Adept AI Labs, which raised $350 million in March for a generative AI service that can operate existing software. Oracle also has a cloud AI partnership with Nvidia, which involves Nvidia GPU hardware and Nvidia using the Oracle Cloud to help with its ongoing AI development. All told, Ellison boasted that Oracle Cloud is already a multi-billion-dollar business for AI workloads. “In the aggregate, our generative AI cloud customers have recently signed contracts to purchase more than $2 billion of capacity in Oracle’s Gen2 Cloud,” Ellison said. While the numbers are large and growing, in the cloud business Oracle still trails the big three hyperscalers, which all have their own generative AI services: Amazon Web Services (AWS) announced its Bedrock generative AI service in April, Google updated a host of its own services and models at its recent I/O conference, and Microsoft benefits from its tight partnership with OpenAI.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. © 2023 VentureBeat. All rights reserved. "
14,060
2,023
"What does your cloud cost? CloudZero raises $32M to help businesses answer this question and reduce spend | VentureBeat"
"https://venturebeat.com/data-infrastructure/what-does-your-cloud-cost-cloudzero-raises-32m-to-help-businesses-answer-this-question-and-reduce-spend"
"What does your cloud cost? CloudZero raises $32M to help businesses answer this question and reduce spend

Credit: VentureBeat made with Midjourney

It’s become a fact of modern business, at least if your company has a digital presence: Large or small, established or insurgent, no matter the industry, you likely need to think about cloud data storage — and the associated costs. So it makes sense that CloudZero, a B2B firm that makes tools to help business customers understand and control their cloud costs, today announced $32 million in Series B funding to further expand its cloud intelligence platform. “As the cloud continues to become an integral part of businesses, CloudZero is stepping up to make cloud cost management simpler, more effective and more efficient,” said Phil Pergola, CEO of CloudZero. 
“Our platform is purpose-built for engineers, aligning the interests of finance, operations and engineering teams and driving significant cost savings.” The new funding round was spearheaded by Innovius Capital and Threshold Ventures, with additional contributions from existing investors Matrix Partners, Underscore VC and G20 Ventures.

It’s a good time to be in the cloud business

CloudZero expects cloud spending to reach a record $600 billion in 2023, with 73% of companies reporting cloud costs as a major concern at the board level. CloudZero has positioned itself as a frontrunner in this space by offering greater visibility into cloud costs across a multitude of providers. Its platform empowers software teams to manage cloud spending at a granular level, helping them understand how specific product features or customer interactions affect their bottom line. The real-time data enables proactive cost optimization, limiting waste by identifying costly lines of code or SQL queries that may cause expense spikes. “What distinguishes CloudZero from other vendors in the market is its ability to amalgamate all sources of cloud costs into one comprehensive platform,” stated Justin Moore, CEO of Innovius Capital. “This allows individual engineers to pinpoint exact cost drivers while enabling finance and operations teams to accurately forecast the company’s unit economics.”

Sky-high ambitions

CloudZero’s platform provides complete visibility into every aspect of cloud expenditure through its AnyCost feature, which ingests all types of cloud spend (IaaS, PaaS, SaaS) in real time, normalizes it into a common data model and presents a coherent view to stakeholders. 
In addition, CloudZero’s CostFormation feature combines billing and telemetry data to allocate every cent of spend into taggable and untaggable categories, including shared resources, multi-tenant architecture and Kubernetes, eliminating manual tagging and spreadsheet use. Perhaps the platform’s most distinctive feature is its set of engineering engagement tools: CloudZero equips engineers with AI-powered anomaly detection, enabling them to optimize the fixed and variable costs associated with their cloud consumption. As Mike Rosenberg, senior director of engineering at fintech bank Nubank, put it, “CloudZero has one of the most powerful cloud cost intelligence platforms on the market. As a fellow data-driven organization, CloudZero is a strong cultural fit for Nubank, and we’re glad to partner with the team.”

But the forecast for fierce competition remains high

With this new capital, CloudZero plans to expand its platform features, develop its enterprise functionality and grow its team to help more customers maximize their cloud investments. At the same time, it faces fiercer competition than ever from established rivals in the space, including Microsoft, which recently announced its Fabric multicloud analytics platform with an eye to taking on Google and Amazon. "
14,061
2,021
"What is a decentralized database? | VentureBeat"
"https://venturebeat.com/business/what-is-a-decentralized-database"
"What is a decentralized database?

A decentralized database splits the workload up among multiple machines and uses sophisticated algorithms to balance the incoming and outgoing requests for the best response time. This type of database is useful when more data needs to be stored than can physically be saved on one machine. The bits — like log files, data collected by tracking click-throughs in the application, and the data generated by internet of things devices — pile up and need to be stored somewhere. Decentralized databases are also frequently referred to as distributed databases.

There are several good reasons for splitting up a database:

Size: The largest commodity disk drives available at the time of this writing are 18 terabytes. Some data sets are larger than can be stored on a single drive and must be split up across multiple drives.

Demand: If many users are trying to access the data at the same time, database performance suffers. Splitting the workload means that more machines can answer more requests, and users don’t notice any performance delays.

Redundancy: Drives can fail. If the data is valuable, creating multiple copies and storing them across multiple machines protects against hardware failure.

Geographic redundancy: Spreading out multiple copies in different locations reduces the threat of catastrophic fire, natural disaster, or power outage.

Speed: Network latency is still a problem when the database and the user making queries are geographically far apart. Placing copies of the data in centers close to the user results in faster responses because the data doesn’t have to travel as far. Speed is especially important for projects that serve users on different continents.

Computational load: Some data sets have to be distributed because the computational load required during analysis is too large for one machine to handle. A machine learning application, for instance, may distribute large data sets across multiple systems in order to spread out the analytical work, which can be quite substantial.

Privacy: Some data sets are split up to maximize privacy and minimize the risks in case of a data breach. If different parts of the data are stored on different machines, even if one part is exposed in a breach, the rest of the data is still safe.

Politics: When multiple groups use the same data set, there may be challenges over governance. Storing the data across multiple machines can help when some data is managed by one group and other data is managed by another.

One approach to simplify the architecture is to split the dataset into smaller parts and assign the parts to certain machines. One computer might handle all people whose last name begins with A through F, another G through M, etc. 
This splitting, often called “sharding,” can inspire strategies that range from simple to complex.

Distributing a database can be tricky

The greatest challenge with splitting up the database is ensuring that the information remains consistent. For example, in a hypothetical airline booking system, if one machine responds to a database query that an airplane seat has been sold, then another machine shouldn’t respond to a query by saying that the seat is open and available. Some distributed databases enforce the rules on consistency carefully so that all queries receive the same answer, regardless of which node in the cluster responded to the query. Other distributed databases relax the consistency requirement in favor of “eventual consistency.” With eventual consistency, the machines can be out of sync with each other and return different answers, so long as the machines eventually catch up to each other and return the same results. In some cases, one machine may not hear about the new version of the data stored on another machine for some time. Machines in the same datacenter tend to reach consistency faster than those separated by longer distances or slower networks. Database developers must choose between fast responses and consistent answers. Tight synchronization between the distributed versions will increase the amount of computation and slow the responses, but the answers will be more accurate. Allowing data to be out of sync will speed up performance, but at the expense of accuracy. Choosing whether to prioritize speed or accuracy is a business decision that can be an art. Banks, for instance, know their customers want correct accounting more than split-second responses. Social media companies, however, may choose speed because most posts are rarely edited and small differences in propagation aren’t essential. 
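The last-name range scheme described above can be sketched in a few lines of Python. The shard names and letter ranges here are illustrative assumptions, not taken from any real product:

```python
# A minimal sketch of range-based sharding: each record is routed to a shard
# by the first letter of the last name. Shard names and ranges are invented
# for illustration.

SHARD_RANGES = {
    "shard-1": ("A", "F"),   # last names A through F
    "shard-2": ("G", "M"),   # last names G through M
    "shard-3": ("N", "S"),
    "shard-4": ("T", "Z"),
}

def shard_for(last_name: str) -> str:
    """Return the shard responsible for storing a given last name."""
    initial = last_name[0].upper()
    for shard, (low, high) in SHARD_RANGES.items():
        if low <= initial <= high:
            return shard
    raise ValueError(f"no shard covers initial {initial!r}")

print(shard_for("Ellison"))  # shard-1
print(shard_for("Garcia"))   # shard-2
```

Real systems often hash the key instead of using ranges, precisely to avoid hot spots when one range (say, common surnames) gets far more traffic than the others.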
Legacy approaches to distributed systems

The major database companies offer elaborate options for distributing data storage. Some support large machines with multiple processors, multiple disks, and large blocks of RAM. The machine is technically one computer, but the individual processors coordinate their responses much as they would if they were separated by continents. Many organizations run their Oracle and SAP deployments on Amazon Web Services in order to take advantage of the computing power. AWS’ u-24tb1.metal, for instance, may look like one machine on the invoice, but it has 448 processors inside, along with 24 terabytes of RAM. It is optimized for very large databases like SAP’s HANA, which stores the bulk of the information in RAM for fast response. All of the major databases have options for replicating the database to create distributed versions that are split among distinct machines. Oracle’s database, for instance, has long supported a wide range of replication strategies across collections of machines that can even include non-Oracle databases. Lately, Oracle has been marketing a version with the name “autonomous” to signify that it’s able to scale and replicate itself automatically in response to loads. MariaDB, a fork of MySQL, also supports a variety of replication strategies that allow one primary node to pass copies of all transactions to replicas that are commonly set up to be read-only. That is, a replica can answer queries for information, but it doesn’t accept new writes. In a recent presentation, Max Mether, one of the cofounders of MariaDB, said his company is working hard at adding autonomous abilities to its database. “The server should know how to tune itself better than you,” he explained. “That doesn’t mean you shouldn’t have the option to tune the server, but for many of these variables, it’s really hard as a user to figure out how to tune them optimally. 
Ideally you should just let the server choose, based on the current workload, what makes sense.”

Upstarts handle distribution differently

The rise of cloud services hides some of the complexity of distributing the databases, at least for configuring the server and arranging for the connection. DigitalOcean, for instance, offers managed versions of MySQL, PostgreSQL, and Redis. Clusters can be created at a certain size with a single control panel to offer storage and failover. Some providers have added the ability to spread out clusters in different datacenters around the world. Amazon’s RDS, for instance, can configure clusters that span multiple areas called “availability zones.” Online file storage is also starting to offer much of the same replication. While the services that offer to store blocks of data in buckets don’t provide the indexing or complex searching of databases, they do offer replication as part of the deal. Some approaches work to merge more complex calculations with distributed data sets. Hadoop and Spark, for instance, are just two of the popular open-source constellations of tools that match distributed computation with distributed data, and a number of companies specialize in supporting versions installed in-house or in cloud configurations. Databricks’ Delta Lake, for instance, is one product that supports complex data-mining operations on distributed data. Groups that value privacy are also exploring complicated distributed operations like the Interplanetary File System, a project designed to spread web data out among multiple locations for speed and redundancy.

What distributed databases can’t do

Not all work requires the complexity of coordinating multiple machines. Some projects may be labeled “big data” by project managers who feel aspirational, even though the volume and computational load are easily handled by a single machine. 
If a fast response time is not essential and if the size is not too large and won’t grow in an unpredictable way, a simpler database with regular backups may be sufficient. This article is part of a series on enterprise database technology trends. "
14,062
2,023
"Using the blockchain to prevent data breaches | VentureBeat"
"https://venturebeat.com/security/using-the-blockchain-to-prevent-data-breaches"
"Using the blockchain to prevent data breaches

Data breaches have, unfortunately, become an all-too-common reality. The Varonis 2021 Data Risk Report indicates that most corporations have poor cybersecurity practices and unprotected data, making them vulnerable to cyberattacks and data loss. With a single data breach costing a company an average of $3.86 million and eroding a brand’s reputation and its consumers’ trust, mitigating the risks is no longer a luxury. However, as cyberattacks get more pervasive and sophisticated, merely patching up traditional cybersecurity measures may not be enough to fend off future data breaches. Instead, it’s imperative to start seeking more advanced security solutions. 
As far as innovative solutions go, preventing data breaches by utilizing the blockchain may be our best hope.

Blockchain technology 101

Blockchain technology, also referred to as distributed ledger technology (DLT), is the culmination of decades of research and advancement in cryptography and cybersecurity. The term “blockchain” was first popularized thanks to cryptocurrency, as it’s the technology behind record-keeping in the Bitcoin network. This technology makes it extremely difficult to change or hack a system, as it allows for the data to be recorded and distributed but not copied. Since it provides a brand-new approach to storing data securely, it can be a promising solution for data breaches in any environment with high security requirements. Built on the idea of P2P networks, a blockchain is a public, digital ledger of stored data shared across a whole network of computer systems. Each block holds several transactions, and whenever a new transaction happens, a record of that transaction gets added to every network participant’s ledger. Its robust encryption and decentralized, immutable nature could be the answer to preventing data breaches.

Enhancing data security via encryption

World Wide Web inventor Tim Berners-Lee has said recently that “we’ve lost control of our personal data.” Companies store enormous amounts of personally identifiable information (PII), including usernames, passwords, payment details, and even social security numbers, as the Domino’s data leak in India (among others) has made clear. While this data is almost always encrypted, it’s never as secure as it would be in a blockchain. By making use of the best aspects of cryptography, blockchain could finally put an end to data breaches. How can a shared ledger be more secure than standard encryption methods? 
To secure stored data, blockchain employs two different types of cryptographic algorithms: hash functions and asymmetric-key algorithms. This way, the data can only be shared with the member’s consent, and members can also specify how the recipient of their data can use it and the window of time in which the recipient is allowed to do so.

Hash functions

When the first transaction of a chain occurs, the blockchain’s code gives it a unique hash value. As more transactions occur, their hash values are hashed and encoded into a Merkle tree, thereby creating a block. Every block gets a unique hash that encodes the hash of the previous block’s header and a timestamp. This creates a link between the two blocks, which, in turn, becomes the first link in the chain. Since this link is created using unique information from each block, the two are immutably bound.

Asymmetric encryption

Asymmetric encryption, also known as public-key cryptography, encrypts plain text using two keys: a private key that’s typically produced via a random number algorithm, and a public one. The public key is available freely and can be transferred over unsecured channels. The private key, on the other hand, is kept secret so that only the user knows it. Without it, it’s almost impossible to access the data. The private key also functions as a digital signature, like a real-world signature. This way, blockchain gives individual consumers the ability to manage their own data and specify with whom to share it over cryptographically encoded networks.

Decentralization

A primary reason for the increase in data breaches is over-reliance on centralized servers. Once consumers and app users enter their personal data, it’s written directly into the company’s database, and the user doesn’t get much say in what happens to it afterward. Even if users attempt to limit the data the company can share with third parties, there will be loopholes to exploit. 
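The hash-function scheme described above (transaction hashes folded into a Merkle root, with each block's hash covering the previous block's hash and a timestamp) can be sketched with Python's standard hashlib. The header fields here are simplified assumptions; real chains such as Bitcoin use a richer block header:

```python
# Simplified sketch of Merkle-tree hashing and block linking. Field names
# ("prev", "time", "root") are illustrative, not a real chain's format.

import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(tx_hashes):
    """Hash pairs of nodes level by level until a single root remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last hash if odd
            level.append(level[-1])
        level = [sha256((left + right).encode())
                 for left, right in zip(level[::2], level[1::2])]
    return level[0]

def block_hash(prev_hash: str, timestamp: int, transactions) -> str:
    tx_hashes = [sha256(json.dumps(tx, sort_keys=True).encode())
                 for tx in transactions]
    header = {"prev": prev_hash, "time": timestamp,
              "root": merkle_root(tx_hashes)}
    return sha256(json.dumps(header, sort_keys=True).encode())

genesis = block_hash("0" * 64, 1, [{"from": "a", "to": "b", "amount": 5}])
print(len(genesis))  # 64 hex characters of a SHA-256 digest
```

Because each header embeds the previous block's hash, changing any transaction changes its block's hash and, transitively, every block after it.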
As the Facebook–Cambridge Analytica data-mining scandal showed, the results of such centralization can be catastrophic. Additionally, even assuming goodwill, the company’s servers could still get hacked by cybercriminals. In contrast, blockchains are decentralized, immutable records of data. This decentralization eliminates the need for one trusted, centralized authority to verify data integrity. Instead, it allows users to share data in a trustless environment. Each member has access to their own data, a system known as zero-knowledge storage. This also makes the network less likely to fall victim to hackers: unless they bring down the whole network simultaneously, the undamaged nodes will quickly detect the intrusion. Since decentralization reduces points of weakness, blockchains also have a much lower chance of succumbing to an IP-based DDoS attack than centralized systems using client/server architectures.

Immutability

In addition to being decentralized, blockchains are designed to be immutable, which increases data integrity. This immutability makes all the data stored in a blockchain almost impossible to alter. Because every individual in the network has access to a copy of the distributed ledger, any corruption that occurs in a member’s ledger will automatically cause it to be rejected by the rest of the network members. Any alteration of the block data will therefore lead to inconsistency and break the chain, rendering it invalid.

The bottom line

Even though blockchain technology has been around since 2009, it has much untapped potential in the field of cybersecurity, especially when it comes to preventing data breaches. The top-notch cryptography employed by blockchain protocols guarantees the safety of all data stored in the ledger, making it a promising solution. 
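The immutability property described above can be demonstrated with a toy chain: each block records the previous block's hash, so altering any block invalidates every later link. The block structure here is an illustrative assumption, not any production blockchain format:

```python
# Toy chain showing tamper-evidence: validating re-derives every link, and a
# single edited block breaks the hash of the link that follows it.

import hashlib
import json

def block_digest(block: dict) -> str:
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(payloads):
    """Build a chain where each block stores the previous block's hash."""
    chain, prev = [], "0" * 64
    for payload in payloads:
        block = {"prev": prev, "data": payload}
        chain.append(block)
        prev = block_digest(block)
    return chain

def is_valid(chain) -> bool:
    """Re-derive every link; any mismatch means the chain was tampered with."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_digest(block)
    return True

chain = make_chain(["tx1", "tx2", "tx3"])
print(is_valid(chain))            # True
chain[1]["data"] = "tx2-forged"   # tamper with the middle block
print(is_valid(chain))            # False
```

This is exactly why a corrupted ledger copy is rejected by the rest of the network: honest nodes re-derive the links and the forged copy fails the check.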
Since nodes running the blockchain must always verify any transaction’s validity before it’s executed, cybercriminals are almost guaranteed to be stopped in their tracks before they gain access to any private data. Jenelle Fulton-Brown is a security architect and internet privacy advocate based in Toronto, Canada, helping Fortune 500 companies build future-proof internal systems. "
14,063
2,023
"Roleverse empowers users to make their own games with generative AI | VentureBeat"
"https://venturebeat.com/games/roleverse-empowers-users-to-make-their-own-games-with-generative-ai"
"Roleverse empowers users to make their own games with generative AI

Roleverse is bringing the power of creation to all users by enabling them to create their own games using generative AI. The idea is to combine the ingenuity and creative passion of ordinary players with emerging technologies that enable them to create games more quickly and with better quality. But it’s not just about giving players access to ChatGPT and entering text prompts, hoping good games will pop out. Rather, the game asks players questions that help them narrow down their vision more quickly to something that looks cool. In response to prompts from Roleverse, you create a game by customizing something — which starts out more generic — until you’re satisfied with your own original work. “We have created a service or a platform where you can create a game by saying what kind of game you want to play,” said Jani Penttinen, CTO of Roleverse, in an interview with GamesBeat. “You say, ‘I want to go to a tropical island,’ and it will create you an island. You can change it by saying you want to make it winter. It can create enemies. It can make changes to the landscape.”

The result is a platform that enables people to create their own visions and become storytellers themselves. No coding skills are needed. You can do simple things that change the world dramatically. For instance, if you say the vibe should be spooky, then it will make the world darker and scarier. The company uses AI to create emergent gameplay, or something that you didn’t expect when you started out. Rather than viewing AI as a threat, the company sees it as a tool to unlock more creativity, as the player has the freedom to alter the world.

How it started

Penttinen started making games in the early 1990s and began working on machine learning in games about six years ago. That was a little early, but the tech has made big advances since then. During the past year, with the emergence of large language models that can actually accomplish things, Penttinen dove into the tech. He felt that generating a story for a game was the easy part; building an entire world and a game around that is the hard part. “No one really expected them to be the solution that actually lets you have intelligence,” he said. “It used to be that we thought you can create customer service robots or something like that. But it turned out we could do so much more with it.” Penttinen sees each game as more like an island, as in Fortnite, where others can visit. In the long run, he hopes to have multiplayer so that friends can visit each other, and streamers streaming their creations to others. The company has raised $1.64 million to date, and Penttinen expects to raise more later this year. Aurelien Merville is the CEO and Markus Kiukkonen is COO. Overall, there are eight founders and one employee.

How it works

You can share the results as you wish. The company is opening up Roleverse for testing on its Discord community. That gives you access to playtest the game and start building things. 
The game has an AI backend, based on OpenAI’s GPT-4 AI model, that has full control of the world. It’s also connected to Google’s Bard model. “We basically have this proxy server where we call our own API, and our back end will connect to either OpenAI or Google Bard,” he said. So it can create anything. You can say that you want the world to be more dangerous, and the AI will figure it out, Penttinen said. The language model can create its own backstory. “It’s kind of like Minecraft without the editor,” he said. “You don’t have to know how to edit things. You don’t need to drag and drop or click with a mouse. You say what you want.” Roleverse is seeking playtesters now, and it expects the game will go live in October 2023. The company started more than a year ago, and it started building things in August 2022. Penttinen is in Austin, Texas, but everyone else is based in Helsinki, Finland. “We’re expecting to see a variety of different games in unexpected ways,” he said. “It’s hard to compare it to any other game because we just tried to give as much freedom as possible to the players to be able to create.” The company has designed templates for games, and its engineers have been teaching the AI how to design a great gaming session. It starts with a story, and then it creates the world around that story. The games can usually be played in about 15 minutes to an hour. It’s sort of like teaching a junior game designer how to design a great game, he said. The game world that gets created is about a square kilometer. “We want to give the AI the ability to come up with endless variations when you design a game,” he said. “If you really like it, you can share it with your friends.” While the company’s platform will be called Roleverse, it is also making its own game, and the name of that game is yet to be decided. It’s a Zelda-like, open-world game. “It’s like a sandbox platform where people will be able to experiment and create their own things,” he said.
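The proxy-server pattern Penttinen describes — the game calls one internal API, and the back end decides whether OpenAI or Google Bard serves the request — can be sketched roughly as follows. The endpoint URLs and payload fields here are illustrative placeholders, not Roleverse's actual API.

```python
# Sketch of an LLM proxy back end: one internal entry point that routes
# a player's request to the configured provider. All names are hypothetical.

BACKENDS = {
    "openai": "https://llm-proxy.example/openai/chat",
    "bard": "https://llm-proxy.example/bard/generate",
}

def route_request(prompt: str, provider: str = "openai") -> dict:
    """Return the forwarding target and payload for the chosen backend."""
    if provider not in BACKENDS:
        raise ValueError(f"unknown provider: {provider}")
    return {"url": BACKENDS[provider], "payload": {"prompt": prompt}}
```

A setup like this keeps provider choice server-side, so the game client never needs to know which model answered.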
“We are showcasing what can be done with the tech. But the ultimate goal is that we will be hosting a platform for all kinds of games.” As for making money, Penttinen said, “Initially it will be a similar model as Midjourney and ChatGPT, where you have some amount of free use and if you want to do more, you need to subscribe. This will be the sandbox. Later on, when our first real game launches, it will be free to play with in-app purchases.” Because there is no direct text input from the user, Roleverse aims to stop people from creating inappropriate games that might violate community standards. The company is keeping it family-friendly. Once you reach the point where you have enough built, then you can say a sentence that can modify the way it’s built. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,064
2,023
"Nvidia CEO Jensen Huang's view of generative AI's hyper growth | interview | VentureBeat"
"https://venturebeat.com/ai/jensen-huangs-confidence-in-generative-ai-fuels-rally"
"Nvidia CEO Jensen Huang’s view of generative AI’s hyper growth | interview Jensen Huang will be giving a keynote speech at the Computex 2023 event in Taiwan over the weekend, but he’s already riding high from the resurgence in AI demand thanks to the popularity of generative AI such as OpenAI’s ChatGPT. Huang reported earnings on Wednesday that beat Wall Street’s expectations for revenues as well as expectations for the second half of the year. That in turn fueled a broad tech rally on Wall Street. The AI and graphics chip company’s stock price rose 27% in the past couple of days from $305 a share to $389.25 today. With a market cap of $963 billion, Nvidia’s valuation is closing in on $1 trillion.
Thanks to strength in chips for the data center, Nvidia reported revenues of $7.19 billion for the first fiscal quarter ended April 30, down 13% from a year ago but above expectations. Data center revenue in the first fiscal quarter was a record $4.28 billion, up 14% from a year ago and up 18% from the previous quarter. I spoke briefly with Huang after the earnings announcement this week and he filled me in on his confidence in the AI surge as well as Nvidia’s ability to line up manufacturing to meet this demand. But he didn’t think there was a surge coming in the broader economy in the second half. And he noted that gaming, while down from a year ago, is back to quarterly growth from previous quarters. Here’s an edited transcript of our interview. GamesBeat: It looks like you guys have a cheerful report today. Jensen Huang: We always have cheerful reports! It’s keynote time. I’ll say something fun. GamesBeat: I wondered how the transition would work as we go from stepping on the gas, having shortages during the pandemic, stepping on the brakes, and now stepping on the gas again in the wake of ChatGPT’s launch and the growth of generative AI. How do you feel the pattern is going to be for meeting these higher expectations now? Huang: The good news is we were already flooring it on Hopper. We went into production in August of last year. Our timing was impeccable. Ampere, of course, is still in high demand, and it’s already in volume. The first part is our supply chain is very large, and our supply chain flow is already very high. On top of that, we still have quite a big step up in demand. We’ve procured substantially more supply for the second half of this year. We’ve responded very quickly and placed very substantial orders. We’ve procured a bunch of supply incoming.
GamesBeat: Do you see something like an economic revival accompanying this? Does the second half of the year look good for that reason as well? Or is this really just an AI phenomenon? Huang: I think it’s an AI phenomenon. The reason for that is because the rest of the data center is still down. Data center, enterprise computing, as you know, is muted. But generative AI is doing incredibly. I think it’s very focused on generative AI. For the first time, people can see how they’re going to make money with generative AI. All these APIs can be connected to all these services and applications. It’s a lot easier to invest when you can see that return on investment. That’s number one. Number two, people finally realized, when everything clicked together, that accelerated computing saves them money and saves them power. GamesBeat: Why are large language models so expensive to train? Huang: Well, they’re not. They’re actually not. GamesBeat: The impression I got from ChatGPT is that it was very expensive. Huang: It doesn’t cost that much at all. The reason for that is–it just depends on where you started. If you’re a software engineer, you used to be able to write an application just by buying a Macbook. Now, all of a sudden, the fact that you need a supercomputer to help you develop the model seems like a lot. But let’s take it into perspective. If you were building a chip company and you were taping out a chip, the tapeout of a chip is around $100 million, just the tapeout. Not to mention the tools, which are probably another $100 million, and not to mention all the engineers, all the systems you’re bringing up, things like that. In order to build one of our chips, it’s a few billion dollars. And we’re just one chip company. There’s a whole bunch of chip companies. When they tape out a chip it’s no less than $25 million. 
Developing a large language model, the software industry is learning, is kind of like taping out a chip these days. GamesBeat: Is there much worry that we could still see shortages of some kind because of this rapid change in demand? Or are you not worried so much about that? Huang: I think the shortages for the services are quite severe at the moment. But I think it’s going to improve tremendously in just a few months, as all the systems are delivering in real time. This is going to get better in real time. GamesBeat: Gaming seems softer. Is that for any particular reason? Huang: No, I thought gaming was terrific. We’re seeing sequential growth. GamesBeat: I guess it’s still down from a year ago, though. Huang: Yes, but that’s because of all the things from a year ago. The yearly comparisons are tough. But the quarterly results are good. The channel inventory correction is now behind us. We’re ramping Ada now across the board. I’m quite excited about that. There’s a new application for creatives in town as well. It’s called generative AI. GeForce is for gamers and creatives. Now you have generative AI to help you create things. That’s the buzz. People are very excited about that.
"
14,065
2,023
"Speech AI, supercomputing in the cloud, and GPUs for LLMs and generative AI among Nvidia’s next big moves | VentureBeat"
"https://venturebeat.com/ai/speech-ai-supercomputing-cloud-gpus-llms-generative-ai-nvidia-next-big-moves"
"Speech AI, supercomputing in the cloud, and GPUs for LLMs and generative AI among Nvidia’s next big moves At its GTC 2023 conference, Nvidia revealed its plans for speech AI, with large language model (LLM) development playing a key role. Continuing to grow its software prowess, the hardware giant has announced a suite of tools to aid developers and organizations working toward advanced natural language processing (NLP). In this regard, the company unveiled NeMo and DGX Cloud on the software side, and the Hopper GPU on the hardware side. NeMo, part of the Nvidia AI Foundations cloud services, creates AI-driven language and speech models. DGX Cloud is an infrastructure platform specially designed for delivering premium services over the cloud and running custom AI models.
In Nvidia’s new lineup of AI hardware, the much-awaited Hopper GPU is now available and poised to enhance real-time LLM inference. Dialing up LLM workloads in the cloud Nvidia’s DGX Cloud is an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for LLMs, generative AI and other groundbreaking applications. DGX Cloud provides dedicated clusters of DGX AI supercomputing paired with Nvidia’s proprietary AI software. This service in effect allows every enterprise to access its own AI supercomputer through a simple web browser, eliminating the complexity associated with acquiring, deploying and managing on-premises infrastructure. Moreover, the service includes support from Nvidia experts throughout the AI development pipeline. Customers can work directly with Nvidia engineers to optimize their models and resolve development challenges across a broad range of industry use cases. “We are at the iPhone moment of AI,” said Jensen Huang, founder and CEO of Nvidia. “Startups are racing to build disruptive products and business models, and incumbents are looking to respond. DGX Cloud gives customers instant access to Nvidia AI supercomputing in global-scale clouds.” ServiceNow uses DGX Cloud with on-premises Nvidia DGX supercomputers for flexible, scalable hybrid-cloud AI supercomputing that helps power its AI research on large language models, code generation and causal analysis. ServiceNow also co-stewards the BigCode project, a responsible open-science LLM initiative, which is trained on the Megatron-LM framework from Nvidia.
“BigCode was implemented using multi-query attention in our Nvidia Megatron-LM clone running on a single A100 GPU,” Jeremy Barnes, vice president of product platform, AI at ServiceNow, told VentureBeat. “This resulted in inference latency being halved and throughput increased 3.8 times, illustrating the kind of workloads possible at the cutting edge of LLMs and generative AI on Nvidia.” Barnes said that ServiceNow aims to improve user experience and automation outcomes for customers. “The technologies are developed in our fundamental and applied AI research groups, who are focused on the responsible development of foundation models for enterprise AI,” Barnes added. The DGX cloud instances start at $36,999 per instance per month. Streamlining speech AI development The Nvidia NeMo service is designed to assist enterprises in combining LLMs with their proprietary data to improve chatbots, customer service and other applications. As part of the newly launched Nvidia AI Foundations family of cloud services, the Nvidia NeMo service enables businesses to close the gap by augmenting their LLMs with proprietary data. This allows them to frequently update a model’s knowledge base through reinforcement learning without starting from scratch. “Our current emphasis is on customization for LLM models,” said Manuvir Das, vice president of enterprise computing at Nvidia, during a GTC prebriefing. “Using our services, enterprises can either build language models from scratch or utilize our sample architectures.” This new functionality in the NeMo service empowers large language models to retrieve accurate information from proprietary data sources and generate conversational, humanlike responses to user queries. NeMo aims to help enterprises keep pace with a constantly changing landscape, unlocking capabilities such as highly accurate AI chatbots, enterprise search engines and market intelligence tools. 
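The multi-query attention that ServiceNow credits for halving inference latency has a simple core idea: all query heads attend over a single shared key/value head, so the per-token KV cache an inference server must keep shrinks by a factor of the head count. A toy NumPy sketch of that idea (not ServiceNow's Megatron-LM code, and with random weights for illustration):

```python
import numpy as np

def multi_query_attention(x, w_q, w_k, w_v, n_heads):
    """Toy multi-query attention: n_heads query heads share ONE key/value
    head, so w_k and w_v project to a single head's width instead of
    n_heads of them -- the source of the memory and latency savings."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ w_q).reshape(seq, n_heads, d_head)  # per-head queries
    k = x @ w_k                                  # (seq, d_head), shared
    v = x @ w_v                                  # (seq, d_head), shared
    out = np.empty((seq, n_heads, d_head))
    for h in range(n_heads):
        scores = (q[:, h, :] @ k.T) / np.sqrt(d_head)
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h, :] = weights @ v
    return out.reshape(seq, d_model)
```

In standard multi-head attention, `w_k` and `w_v` would each produce `n_heads` separate projections; collapsing them to one is the entire change.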
With NeMo, enterprises can build models for NLP, real-time automated speech recognition (ASR) and text-to-speech (TTS) applications such as video call transcriptions, intelligent video assistants and automated call center support. NeMo can assist enterprises in building models that can learn from and adapt to an evolving knowledge base independent of the dataset that the model was initially trained on. Instead of requiring an LLM to be retrained to account for new information, NeMo can tap into enterprise data sources for up-to-date details. This capability allows enterprises to personalize large language models with regularly updated, domain-specific knowledge for their applications. It also includes the ability to cite sources for the language model’s responses, enhancing user trust in the output. Developers using NeMo can also set up guardrails to define the AI’s area of expertise, providing better control over the generated responses. Nvidia said that Quantiphi , a digital engineering solutions and platforms company, is working with NeMo to build a modular generative AI solution to help enterprises create customized LLMs to improve worker productivity. Its teams are also developing tools that enable users to search for up-to-date information across unstructured text, images and tables in seconds. LLM architectures on steroids? Nvidia also announced four inference GPUs, optimized for a diverse range of emerging LLM and generative AI applications. These GPUs are aimed at assisting developers in creating specialized AI-powered applications that can provide new services and insights quickly. Furthermore, each GPU is designed to be optimized for specific AI inference workloads while also featuring specialized software. Out of the four GPUs unveiled at the GTC, the Nvidia H100 NVL is exclusively tailored for LLM deployment, making it an apt choice for deploying massive LLMs, such as ChatGPT, at scale. 
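The NeMo grounding workflow described above — tap enterprise sources for up-to-date details, then answer with citations — follows the familiar retrieval-augmented pattern: fetch relevant snippets and prepend them, with their source labels, to the prompt. A deliberately naive sketch of that pattern (term overlap instead of embeddings, and not NeMo's actual API):

```python
def retrieve(query, docs, k=2):
    """Rank proprietary documents by naive term overlap with the query.
    A production system would use embeddings; overlap keeps this runnable."""
    terms = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, snippets):
    """Prepend retrieved snippets, keeping each one's source label so the
    model's answer can cite where the information came from."""
    context = "\n".join(f"[{s['source']}] {s['text']}" for s in snippets)
    return (
        "Answer using only the sources below, and cite them.\n"
        f"{context}\nQuestion: {query}"
    )
```

Because the knowledge lives in the retrieved documents rather than the model weights, updating the knowledge base is a data refresh, not a retraining run.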
The H100 NVL boasts 94GB of memory with transformer engine acceleration, and offers up to 12 times faster inference performance on GPT-3 compared to the previous-generation A100 at data center scale. Moreover, the GPU’s software layer includes the Nvidia AI Enterprise software suite. The suite encompasses Nvidia TensorRT, a high-performance deep-learning inference software development kit, and Nvidia Triton Inference Server, an open-source inference-serving software that standardizes model deployment. The H100 NVL GPU will launch in the second half of this year. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,066
2,023
"Senate letter to Meta on LLaMA leak is a threat to open-source AI, say experts | VentureBeat"
"https://venturebeat.com/ai/senate-letter-to-meta-on-llama-leak-is-a-threat-to-open-source-ai-at-a-key-moment-say-experts"
"Senate letter to Meta on LLaMA leak is a threat to open-source AI, say experts Image by Midjourney A letter sent by two U.S. senators to Meta CEO Mark Zuckerberg on Tuesday, which questioned the leak in March of Meta’s popular open-source large language model LLaMA, sends a threat to the open-source AI community, say experts. It is notable because it comes at a key moment when Congress has prioritized regulating artificial intelligence, while open-source AI is seeing a wave of new LLMs. For example, three weeks ago, OpenAI CEO Sam Altman testified before the Senate Subcommittee on Privacy, Technology & the Law — Senator Richard Blumenthal (D-CT) is the chair and Senator Josh Hawley (R-MO) its ranking member — and agreed with calls for a new AI regulatory agency.
The letter to Zuckerberg (who declined to comment but reaffirmed Meta’s “commitment to an open science-based approach to AI research” in a company all-hands meeting today) was sent by Blumenthal and Hawley on behalf of the same subcommittee. The senators said they are concerned about LLaMA’s “potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.” The letter pointed to LLaMA’s release in February, saying that Meta released LLaMA for download by approved researchers, “rather than centralizing and restricting access to the underlying data, software, and model.” It added that Meta’s “choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models.” Concerns about attempts to throw open-source AI ‘under the bus’ Several experts said they were “not interested” in conspiracy theories, but had concerns about machinations behind the scenes. “Look, it’s easy for both government officials and proprietary competitors to throw open source under the bus, because policymakers look at it nervously as something that’s harder to control — and proprietary software providers look at it as a form of competition that they would rather just see go away in some cases,” Adam Thierer, innovation policy analyst at R Street Institute, told VentureBeat in an interview.
“So that makes it an easy target.” William Falcon, CEO of Lightning AI and creator of the open-source PyTorch Lightning, was even clearer, saying that the letter was “super surprising,” and while he didn’t want to “feed conspiracy theories,” it “almost feels like OpenAI and Congress are working together now.” And Steven Weber, a professor at the School of Information and the department of political science at the University of California, Berkeley, went even further, telling VentureBeat that he thinks Microsoft, operating through OpenAI, is “running scared, in the same way that Microsoft ran scared of Linux in the late 1990s and referred to open-source software as a ‘cancer’ on the intellectual property system.” Steve Ballmer, he recalled, “called on his people … to convince people that open source was evil, when in fact what it was was a competitive threat to Windows.” Releasing LLaMA was ‘not an unacceptable risk’ Christopher Manning, director of the Stanford AI Lab, told VentureBeat in a message that while there is not currently legislation or “strong community norms about acceptable practice” when it comes to AI, he “strongly encouraged” the government and AI community to work to develop regulations and norms applicable to all companies, communities and individuals developing or using large AI models. Nevertheless, he said, “In this instance, I am happy to support the open-source release of models like the LLaMA models.” While he does “fully acknowledge” that models like LLaMA can be used for bad purposes, such as disinformation or spam, he said they are smaller and less capable than the largest models built by OpenAI, Anthropic and Google (roughly 175 billion to 512 billion parameters).
Conversely, he said that while LLaMA’s models are larger and of better quality than models released by open-source collectives, they are not dramatically bigger (the largest LLaMA model is 60 billion parameters; the GPT-NeoX model released by the distributed collective of EleutherAI contributors is 20 billion parameters). “As such, I do not consider their release an unacceptable risk,” he said. “We should be cautious about keeping good technology from innovative companies and students trying to learn about and build the future. Often it is better to regulate uses of technology rather than the availability of the technology.” A ‘misguided’ attempt to limit access Vipul Ved Prakash, co-founder and CEO of Together, which runs the RedPajama open-source project that replicated the LLaMA dataset to build open-source, state-of-the-art LLMs, said that the Senate’s letter to Meta is a “misguided attempt at limiting access to a new technology.” The letter, he pointed out, is “full of typical straw-man concerns.” For instance, he said, “it makes no sense to use a language model to generate spam. I helped create what is possibly the most widely deployed anti-spam system on the Internet today, and I can say with confidence that spammers won’t be using LLaMA or other LLMs because there are significantly cheaper ways of creating spam messages.” Many of these concerns, he went on, are “applicable to programming languages that allow you to develop novel programs, and some of these programs are written with malicious intent.
But we don’t limit sophisticated programming languages as a society, because we value capability and functionality they bring into our lives.” In general, he said the discourse around AI safety is a “panicked response with little to zero supporting evidence of societal harms.” Prakash said he worries about it leading to the “squelching of innovation in America and handing over the keys to the most important technology of our generation to a few companies, who have proactively shaped the debate.” Why is Meta a target? One question is why Meta’s models are being singled out (beyond the fact that Meta has had run-ins with Congress for decades). After all, both Manning and Falcon pointed out that the UAE government-backed Technology Innovation Institute has made available an even better-quality 40-billion-parameter model, Falcon. “So it wouldn’t have made much difference to the rate of progress or LLM dissemination whether or not LLaMA was released,” said Manning, while Falcon questioned what the U.S. government could do about its release: “What are they going to do? Tell the UAE they can’t make the model public?” Thierer claimed that this is where the “politics of intimidation” come in. The Blumenthal/Hawley letter, he explained, is “a threat made to open source through what I’ll call a ‘nasty gram’ — a nasty letter saying ‘you should reconsider your position on this.’ They’re not saying we’re going to regulate you, but there’s certainly an ‘or else’ statement hanging in the room that looms above a letter like that.” That, he says, is what’s most troubling. “At some point, lawmakers will start to put more and more pressure on other providers or platforms who may do business with or provide a platform for open-source applications or models,” he said.
“And that’s how you get to regulating open source without formally regulating open source.” "