column  type           min   max
id      int64          0     17.2k
year    int64          2k    2.02k
title   stringlengths  7     208
url     stringlengths  20    263
text    stringlengths  852   324k
id: 14967
year: 2022
title: "ESG Book arms investors with AI-powered insights on sustainability | VentureBeat"
url: "https://venturebeat.com/2022/06/22/esg-book-arms-investors-with-ai-powered-insights-on-sustainability"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ESG Book arms investors with AI-powered insights on sustainability Share on Facebook Share on X Share on LinkedIn Tree growing on stacks of coins on nature background. Saving, accounting and financial concept. ESG Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The U.S., unlike other robust nations, is only now beginning to organize the jigsaw-like pieces needed to structure and standardize sustainability reporting requirements, which will be needed to meet newly proposed U.S. Securities and Exchange Commission (SEC) rules. “When I think about this topic, I’m reminded of walking down the aisle of a grocery store and seeing a product like fat-free milk,” Gary Gensler, SEC chair, said in a statement , “What does ‘fat-free’ mean? Well, in that case, you can see objective figures, like grams of fat, which are detailed on the nutrition label … When it comes to ESG investing, though, there’s currently a huge range of what asset managers might disclose or mean by their claims.” In March, the SEC detailed proposed rules that would require companies — both foreign and domestic that are registered with the SEC — to report climate impact and emissions information. The proposal was enhanced late last month with amendments “to promote consistent, comparable and reliable information for investors concerning funds’ and advisers’ incorporation of environmental, social and governance (ESG) factors.” These specific reporting requirements have, until now, been largely optional in the U.S., but pressure is mounting for companies and investors to understand, communicate and act on sustainability data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But some companies aren’t waiting for an official policy to sort out the specifics. ESG Book , a company founded in 2018, built a cloud-based platform backed by AI-powered analysis that provides its customer base of financial institutions — like Bridgewater, J.P. Morgan, Citi and Robinhood — with sustainability and ESG data of more than 25,000 public companies. “I think we all recognize that there are a number of big pressures around us that the world has to respond to — climate change probably being the most urgent … and financial markets are normally one of the most powerful ways to address and mobilize change,” Daniel Klier, CEO of ESG Book told VentureBeat. 
“That’s why business as usual is no longer an option.” When business can’t be ‘as usual,’ change it ESG Book is fresh off a newly announced $35 million series B round led by Energy Impact Partners along with Meridiam and Allianz X. With the funding, the company aims to scale its platform while ensuring that its technology continues to offer its financial institution customers ESG data that is, “accessible, consistent and transparent, enabling financial markets to allocate capital towards more sustainable and higher impact assets,” according to the company’s press release. Though there are several tools on the market for enterprises to measure their sustainability performance, ESG Books hopes to bridge the gap between the ESG data companies are collecting and placing in various places like press releases, online, annual reports, etc., to share it in an accessible way for investors to equip them with what has historically been disjointed and hard to compile. “You see the SEC and Federal Reserve integrating climate risk into their practices, but at that moment, do you really see fundamental shifts in how investment patterns work?” Klier asked. “We think a lot of other companies are helping individual firms in their carbon calculation, but then the communications with stakeholders — one being the financial services market — at the moment, is not working. That’s the problem we’re trying to fix.” What’s under the hood to bridge the gap ESG Book’s platform gathers 450 data points on companies that they either report on themselves while using AI to analyze 30,000 news sources for what the world is reporting about that company. ESG Book performs analysis in two ways. A fund manager or investor can have the data fed into their own systems through an API or they upload entire investment portfolios via a .csv file to ESG Book’s platform and the ESG Book will perform all the analytics on its platform. “We collect the data with a combination of human analysts and artificial intelligence,” Klier said. “Then we invite every company to verify the data on the platform. So, companies can see what we have on them and, I think, [this is] quite a powerful way of creating a dataset but also giving companies the ownership of what kind of data we have.” The company’s algorithm and dataset of public company information allows investors to sign on, type in the name of a public company such as Alphabet Inc., for instance. From there it displays a UN Global Compact score , which is an assessment on anti-corruption risk, environmental risk, human rights risk, labor risk etc. Investors can drill into underlying raw data that ESG Book has gathered on a company to better understand things like a company’s scope 1, 2 and 3 emissions. It can also show the distribution of a portfolio or company’s performance and allows investors and fund managers to compare the data to see how that performance stacks up against industry or competitor averages. “ESG Book is a platform with the potential to transform the way ESG data is processed by the financial world. We believe it will substantially increase the quality and availability of ESG information to direct financing flows in accordance with sustainable development goals and the Paris Agreement,” said Thierry Deau, founder and CEO of Meridiam, one of the leading investors in the series B round. Sustainability can’t remain a ‘buzzword’ in the C-suite. As Klier notes, sustainability is still a buzzword for many CFOs. 
And though ESG Book’s platform will equip investors with real-time, granular sustainability data, it’s equally of use to CFOs to help them understand how to communicate ESG data and make lasting improvements for their companies. “I think it’s a critical moment for sustainability. Because, in my view, we spent the last five years talking a lot about it and making it a big topic,” Klier said. “Now, the expectations of all stakeholders are huge. I think it’s a critical time to redirect investments and redirect flows.” As for what’s next for ESG Books, Klier says the series B injection will allow the company to prioritize its next focus: to work with investors to create greater transparency in private markets. ESG Book’s growth is part of the larger ESG data and services market, which is projected to rocket to $5 billion by 2025. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
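The upload-and-score workflow described in the article can be sketched roughly as follows. This is a hypothetical illustration only: the endpoint, parameter names and response fields are invented for the example and are not ESG Book's actual API.

```python
# Hypothetical sketch: read a portfolio from a .csv file and request a
# sustainability score for each holding. Endpoint paths, parameter names and
# response fields are placeholders, not ESG Book's real API.
import csv
import requests

API_BASE = "https://api.example-esg-provider.com"   # placeholder URL
API_KEY = "YOUR_API_KEY"                             # placeholder credential

with open("portfolio.csv", newline="") as f:         # one "company" column per row
    holdings = [row["company"] for row in csv.DictReader(f)]

for company in holdings:
    resp = requests.get(
        f"{API_BASE}/v1/scores",                     # invented endpoint
        params={"company": company},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    scores = resp.json()
    # Field names below are illustrative placeholders.
    print(company, scores.get("un_global_compact_score"), scores.get("scope_3_emissions"))
```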
id: 14968
year: 2022
title: "Nvidia's Omniverse virtual simulations move to the cloud | VentureBeat"
url: "https://venturebeat.com/2022/03/22/nvidias-omniverse-virtual-simulations-move-to-the-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia’s Omniverse virtual simulations move to the cloud Share on Facebook Share on X Share on LinkedIn An image from the Nvidia Omniverse Foundational Tech Montage showcasing a virtual ramen shop. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Follow along with VentureBeat’s ongoing coverage from Nvidia’s GTC 2022 event. >> Nvidia CEO Jensen Huang said today that the Omniverse virtual simulation and its tools will be available in the cloud so that developers can use it on just about any computer. Huang made the announcement during the virtual Nvidia GTC online event today. And in an interview with VentureBeat, Omniverse platform vice president Richard Kerris said that the Omniverse ecosystem has expanded 10 times in terms of the companies and creators participating in it. Kerris said that more than 150,000 individuals have downloaded Nvidia Omniverse as a tool to design real-time 3D simulated worlds. Those simulations are being used in everything from games to industrial “digital twins,” where designers test a concept in a virtual design before committing to physical designs. BMW made a digital twin of a car factory before building it in the real world. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The Omniverse is Nvidia’s leading tool for building the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. Kerris said that the metaverse is the network for the next generation of the web. “And we’re focused on the business and industrial side of virtual worlds, which we’re seeing tremendous feedback already from our customers, and use cases that are applicable today,” he said. “We’re focused on those things like visualization, simulation, digital twins, and collaboration. But there are other people using the platform to create content for virtual worlds in other areas, whether it’s entertainment and games.” The power of digital twins The big deal about the digital twins is the feedback loop, Kerris said. The instruments and sensors in the real factory can collect data and feed it back into the Omniverse virtual factory simulation, which can then become more accurate. “More than ever we believe that virtual worlds are required for the next era of AI,” Kerris said. 
“More than ever we believe that virtual worlds are required for the next era of AI,” Kerris said. “So whether it’s training robots [or] using synthetic data generation [for] autonomous driving, or digital twins of factories, cities, and even the grand project around Earth 2, Omniverse is a tool that goes from the creation to the operation of these virtual worlds.”

As for new examples, Kerris said that Amazon is showing off some remarkable robotics, and Pepsico is showing off its warehouse management. There are also updates for Toy Jensen, the avatar of Huang that shows off ways to put human characters in the Omniverse.

“The idea of digital twins is for us such a big part of the next generation of the Industrial Revolution, and it holds true for things like products to factories to cities to the entire Earth,” Kerris said.

Omniverse Cloud

Kerris said that many developers have offered feedback that they wanted to use the cloud so they could run Omniverse on a wide variety of hardware, such as low-end laptops, smartphones, or ordinary desktops rather than high-end workstations. Nvidia used the same technology behind its GeForce Now online gaming service to make Omniverse available in the cloud. Kerris said that makes Omniverse even more accessible to creators, developers, designers, engineers, and researchers worldwide.

“By having all of Omniverse in the cloud, it becomes available to anybody, no matter what kind of platform you’re on, whether you’re on a Chromebook, a Mac or tablet,” Kerris said. “You’ll be able to tap into and using GeForce be able to stream Omniverse right from GeForce Now.”

The first part of the Omniverse journey is design, collaboration, and creating content, whether it’s for use in the worlds or whether it’s creating things like the world itself, the factory robots, cars, and more, Kerris said.

“And then the second part of that journey comes to the digital twin. Once the first part is complete, the next stage of its life begins. So whether you’re building a building, once you’re done with the building, then the digital twin life begins monitoring all the things that are happening in the building, using sensors, and the [same] holds true in so many different other areas like robots in factories, retail, digital humans, things like that. Our customers give us that feedback.”

Kerris said there has been strong demand for Omniverse from customers that are using non-RTX systems, whether they’re Mac customers or others. “They want to get their hands on Omniverse, and for the first time they will be able to do that,” Kerris said. “This will be in a beta and early access for a while, but we wanted to reveal this because it is going to be such a game changer.”

Enterprise growth

Nvidia Omniverse Enterprise is helping top companies enhance their pipelines and creative workflows. New Omniverse Enterprise customers include Amazon, DB Netze, DNEG, Kroger, Lowe’s, and more. There are more than 700 enterprise companies using Omniverse.

Siemens is using Nvidia’s Omniverse and Modulus to create digital twins of its wind farms. Siemens will simulate its wind farms with physics-informed machine learning and run the simulations 4,000 times faster with the latest Nvidia hardware. Virtual representations of Siemens Gamesa’s wind farms will be built using Omniverse and Modulus, which together comprise Nvidia’s digital twin platform for scientific computing. The platform will help Siemens Gamesa run quicker calculations to optimize wind farm layouts, which is expected to lead to farms capable of producing up to 20 percent more power than previous designs.

Kerris said the excitement around the Omniverse is helping to push the pro workstation business to new heights.

GTC event

Nvidia GTC 2022 takes place virtually from March 21 to March 24, featuring 900 sessions and 1,600 speakers on a variety of technology topics including deep learning, Omniverse, data science, robotics, networking, and graphics. Leaders from hundreds of organizations will present, including Amazon, Autodesk, Bloomberg, Cisco, DeepMind, Epic Games, Flipkart, Google Brain, Lockheed Martin, Mercedes-Benz, Microsoft, NASA, NFL, Pfizer, Snap, Sony, Stanford University, U.S. Air Force, U.S. Congress, Visa, VMware, Walt Disney, and Zoom.

10 times

Kerris said the ecosystem had grown by more than ten times since the fall, with a lot of growth in virtual simulations of digital twins, robotics, designs, and content creation software. Systems integrators and rendering companies are now supporting the Omniverse platform. “Adobe has just recently announced an update to their own connections to Omniverse for Substance 3D Painter and the materials library,” he said. The ten-fold figure refers to connections that have been built to the Omniverse; the Adobe example is just one such connection.

“With sensor models, we have uncovered hundreds of opportunities for connections into the platform, whether it’s cameras, microphones, sensors, LiDAR, all kinds of things,” Kerris said. “And we’re starting to see that that area grow tremendously as well. And now with our asset and material libraries, we have hundreds of thousands of assets available in Omniverse, right out of the gate.”

Nvidia has been releasing many updates for Omniverse, such as its interactive viewer, annotation, markup, and presentation tools. Companies such as Deloitte are building teams to create add-ons and customizations for Omniverse and its enterprise marketplace. “We’ve been doing other things to help democratize the complexity of programming,” Kerris said. The Omniverse XR version will be available as a beta in early April.

Kerris said the size of his own team has doubled, an indicator of Nvidia’s growing investment in the Omniverse. “We’re seeing major customers making purchases and we have an incredible pipeline of things coming,” Kerris said. “Jensen’s view is it is time to double down even more.”"
id: 14969
year: 2021
title: "The dos and don’ts of machine learning research | VentureBeat"
url: "https://venturebeat.com/2021/08/23/the-dos-and-donts-of-machine-learning-research"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The dos and don’ts of machine learning research Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Machine learning is becoming an important tool in many industries and fields of science. But ML research and product development present several challenges that, if not addressed, can steer your project in the wrong direction. In a paper recently published on the arXiv preprint server, Michael Lones, Associate Professor in the School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, provides a list of dos and don’ts for machine learning research. The paper, which Lones describes as “lessons that were learnt whilst doing ML research in academia, and whilst supervising students doing ML research,” covers the challenges of different stages of the machine learning research lifecycle. Although aimed at academic researchers, the paper’s guidelines are also useful for developers who are creating machine learning models for real-world applications. Here are my takeaways from the paper, though I recommend anyone involved in machine learning research and development to read it in full. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Pay extra attention to data Machine learning models live and thrive on data. Accordingly, across the paper, Lones reiterates the importance of paying extra attention to data across all stages of the machine learning lifecycle. You must be careful of how you gather and prepare your data and how you use it to train and test your machine learning models. No amount of computation power and advanced technology can help you if your data doesn’t come from a reliable source and hasn’t been gathered in a reliable manner. And you should also use your own due diligence to check the provenance and quality of your data. “Do not assume that, because a data set has been used by a number of papers, it is of good quality,” Lones writes. Your dataset might have various problems that can lead to your model learning the wrong thing. For example, if you’re working on a classification problem and your dataset contains too many examples of one class and too few of another, then the trained machine learning model might end up learning to predict every input as belonging to the stronger class. 
While class imbalance can be spotted quickly with data exploration practices such as the one above, finding other problems needs extra care and experience. For example, if all the pictures in your dataset were taken in daylight, then your machine learning model will perform poorly on dark photos. A more subtle example is the equipment used to capture the data. For instance, if you’ve taken all your training photos with the same camera, your model might end up learning to detect the unique visual footprint of your camera and will perform poorly on images taken with other equipment. Machine learning datasets can contain all kinds of such biases.

The quantity of data is also an important issue. Make sure your data is available in sufficient abundance. “If the signal is strong, then you can get away with less data; if it’s weak, then you need more data,” Lones writes. In some fields, a lack of data can be compensated for with techniques such as cross-validation and data augmentation. But in general, the more complex your machine learning model, the more training data you’ll need. For example, a few hundred training examples might be enough to train a simple regression model with a few parameters, but if you want to develop a deep neural network with millions of parameters, you’ll need much more training data.

Another important point Lones makes in the paper is the need for a strong separation between training and test data. Machine learning engineers usually put aside part of their data to test the trained model, but sometimes the test data leaks into the training process, which can lead to machine learning models that don’t generalize to data gathered from the real world.

“Don’t allow test data to leak into the training process,” he warns. “The best thing you can do to prevent these issues is to partition off a subset of your data right at the start of your project, and only use this independent test set once to measure the generality of a single model at the end of the project.”

In more complicated scenarios, you’ll need a “validation set,” a second held-out set used while selecting and tuning models. For example, if you’re doing cross-validation or ensemble learning, the original test set might not provide a precise evaluation of your models; in this case, a validation set can be useful. “If you have enough data, it’s better to keep some aside and only use it once to provide an unbiased estimate of the final selected model instance,” Lones writes.

Know your models (as well as those of others)

Today, deep learning is all the rage, but not every problem needs deep learning. In fact, not every problem even needs machine learning: sometimes simple pattern matching and rules perform on par with the most complex machine learning models at a fraction of the data and computation costs.

When it comes to problems that do call for machine learning, you should always have a roster of candidate algorithms to evaluate. “Generally speaking, there’s no such thing as a single best ML model,” Lones writes. “In fact, there’s a proof of this, in the form of the No Free Lunch theorem, which shows that no ML approach is any better than any other when considered over every possible problem.”
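A minimal sketch of both habits, assuming a generic tabular classification task (the synthetic data and the particular candidate models are illustrative choices, not Lones’s prescriptions): partition off an independent test set once at the start, then compare a roster of candidates by cross-validation on the training split only.

```python
# Hold out a test set once, then compare candidate models on the training
# data only; the held-out set is used a single time at the very end.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Partitioned off right at the start of the project and not touched again
# until final evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")

# Only the single, finally selected model should be evaluated on X_test, y_test.
```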
The first thing you should check is whether your model matches your problem type. For example, based on whether your intended output is categorical or continuous, you’ll need to choose the right machine learning algorithm along with the right structure. Data types (e.g., tabular data, images, unstructured text) can also be a defining factor in the class of model you use.

One important point Lones makes in his paper is the need to avoid excessive complexity. If your problem can be solved with a simple decision tree or regression model, there’s no point in using deep learning.

Lones also warns against trying to reinvent the wheel. With machine learning being one of the hottest areas of research, there’s always a solid chance that someone else has solved a problem similar to yours. In such cases, the wise thing to do is to examine their work. This can save you a lot of time, because other researchers have already faced and solved challenges that you will likely meet down the road. “To ignore previous studies is to potentially miss out on valuable information,” Lones writes.

Examining papers and work by other researchers might also provide you with machine learning models that you can use and repurpose for your own problem. In fact, machine learning researchers often use each other’s models to save time and computational resources and to start from a baseline trusted by the ML community. “It’s important to avoid ‘not invented here syndrome,’ i.e., only using models that have been invented at your own institution, since this may cause you to omit the best model for a particular problem,” Lones warns.

Know the final goal and its requirements

Having a solid idea of what your machine learning model will be used for can greatly impact its development. If you’re doing machine learning purely for academic purposes and to push the boundaries of science, there might be no limits to the type of data or machine learning algorithms you can use. But not all academic work will remain confined to research labs.

“[For] many academic studies, the eventual goal is to produce an ML model that can be deployed in a real world situation. If this is the case, then it’s worth thinking early on about how it is going to be deployed,” Lones writes.

For example, if your model will be used in an application that runs on user devices rather than on large server clusters, you can’t use large neural networks that require large amounts of memory and storage space. You must design machine learning models that can work in resource-constrained environments.

Another problem you might face is the need for explainability. In some domains, such as finance and healthcare, application developers are legally required to provide explanations of algorithmic decisions in case a user demands them. In such cases, using a black-box model might be impossible. For example, even though a deep neural network might give you a performance advantage, its lack of interpretability might make it useless. Instead, a more transparent model such as a decision tree might be a better choice even if it results in a performance hit. Alternatively, if deep learning is an absolute requirement for your application, you’ll need to investigate techniques that can provide reliable interpretations of activations in the neural network.
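To make the interpretability trade-off concrete, here is a minimal sketch (using the classic Iris dataset as a stand-in) of the kind of transparent model described above: a shallow decision tree whose decision rules can be printed and shown to a domain expert or regulator, something a deep neural network cannot offer directly.

```python
# Train a small, transparent model and print its human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the tree as nested if/else rules over the features.
print(export_text(tree, feature_names=list(data.feature_names)))
```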
As a machine learning engineer, you might not have precise knowledge of the requirements of your model. Therefore, it is important to talk to domain experts, because they can help steer you in the right direction and determine whether you’re solving a relevant problem or not. “Failing to consider the opinion of domain experts can lead to projects which don’t solve useful problems, or which solve useful problems in inappropriate ways,” Lones writes. For example, if you create a neural network that flags fraudulent banking transactions with very high accuracy but provides no explanation of its decisions, financial institutions won’t be able to use it.

Know what to measure and report

There are various ways to measure the performance of machine learning models, but not all of them are relevant to the problem you’re solving. For example, many ML engineers use the “accuracy test” to rate their models. The accuracy test measures the percentage of correct predictions the model makes. This number can be misleading in some cases.

For example, consider a dataset of x-ray scans used to train a machine learning model for cancer detection. Your data is imbalanced, with 90 percent of the training examples flagged as benign and a very small number classified as malignant. If your trained model scores 90 percent on the accuracy test, it might have just learned to label everything as benign. Used in a real-world application, this model could lead to missed cases with disastrous outcomes. In such a case, the ML team must use tests that are insensitive to class imbalance, or use a confusion matrix to check other metrics (see the sketch below). More recent techniques can provide a detailed measure of a model’s performance in various areas.

Based on the application, ML developers might also want to measure several metrics. To return to the cancer detection example, in such a model it might be important to reduce false negatives as much as possible, even if that comes at the cost of lower accuracy or a slight increase in false positives. It is better to send a few healthy people to the hospital for further examination than to miss critical cancer patients.
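A minimal sketch of why accuracy alone misleads on the imbalanced cancer-detection example above: a degenerate model that predicts “benign” for everything still scores 90 percent accuracy, while the confusion matrix and recall expose that it misses every malignant case. The class counts mirror the example in the text; the code itself is illustrative.

```python
# Accuracy looks fine on imbalanced data even when the model is useless.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

y_true = np.array([0] * 900 + [1] * 100)   # 0 = benign, 1 = malignant
y_pred = np.zeros(1000, dtype=int)          # degenerate model: always "benign"

print("accuracy:", accuracy_score(y_true, y_pred))           # 0.9
print("recall (malignant):", recall_score(y_true, y_pred))   # 0.0
print(confusion_matrix(y_true, y_pred))
# [[900   0]
#  [100   0]]  -> every malignant case is missed
```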
In his paper, Lones also warns that when comparing several machine learning models for a problem, you shouldn’t assume that bigger numbers necessarily mean better models. For example, performance differences might be due to your models being trained and tested on different partitions of your dataset, or on entirely different datasets. “To really be sure of a fair comparison between two approaches, you should freshly implement all the models you’re comparing, optimize each one to the same degree, carry out multiple evaluations … and then use statistical tests … to determine whether the differences in performance are significant,” Lones writes.

Lones also warns against overestimating the capabilities of your models in your reports. “A common mistake is to make general statements that are not supported by the data used to train and evaluate models,” he writes. Therefore, any report of your model’s performance must also describe the kind of data it was trained and tested on. Validating your model on multiple datasets can provide a more realistic picture of its capabilities, but you should still be wary of the kinds of data errors discussed earlier.

Transparency can also contribute greatly to other ML research. If you fully describe the architecture of your models as well as the training and validation process, other researchers who read your findings can use them in future work or even help point out potential flaws in your methodology.

Finally, aim for reproducibility: if you publish your source code and model implementations, you can provide the machine learning community with great tools for future work.

Applied machine learning

Interestingly, almost everything Lones wrote in his paper is also applicable to applied machine learning, the branch of ML concerned with integrating models into real products. However, I would like to add a few points that go beyond academic research and are important in real-world applications.

When it comes to data, machine learning engineers must weigh an extra set of considerations before integrating it into products, including data privacy and security, user consent, and regulatory constraints. Many a company has fallen into trouble for mining user data without consent.

Another important matter that ML engineers often forget in applied settings is model decay. Unlike academic research, machine learning models used in real-world applications must be retrained and updated regularly. As everyday data changes, machine learning models “decay” and their performance deteriorates. For example, as life habits changed in the wake of the COVID-19 lockdowns, ML systems that had been trained on old data started to fail and needed retraining. Likewise, language models need to be constantly updated as new trends appear and our speaking and writing habits change. These changes require the ML product team to devise a strategy for the continued collection of fresh data and periodic retraining of their models.

Finally, integration challenges will be an important part of every applied machine learning project. How will your machine learning system interact with the applications currently running in your organization? Is your data infrastructure ready to be plugged into the machine learning pipeline? Does your cloud or server infrastructure support the deployment and scaling of your model? These kinds of questions can make or break the deployment of an ML product. For example, AI research lab OpenAI recently launched a test version of its Codex API model for public appraisal, but the launch failed because its servers couldn’t scale to user demand:

“The Codex Challenge servers are currently overloaded due to demand (Codex itself is fine though!). Team is fixing… please stand by.” — OpenAI (@OpenAI) August 12, 2021

Hopefully, this brief post will help you better assess your machine learning project and avoid mistakes. Read Lones’s full paper, titled “How to avoid machine learning pitfalls: a guide for academic researchers,” for more details about common mistakes in the ML research and development process.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This story originally appeared on Bdtechtalks.com. Copyright 2021."
id: 14970
year: 2022
title: "How to train ML models more efficiently with active learning | VentureBeat"
url: "https://venturebeat.com/2022/01/31/how-to-train-ml-models-more-efficiently-with-active-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored How to train ML models more efficiently with active learning Share on Facebook Share on X Share on LinkedIn Presented by Labelbox How much time is your machine learning team spending on labeling data — and how much of that data is actually improving model performance? Creating effective training data is a challenge that many ML teams today struggle with. It affects nearly every aspect of the ML process. Time: Today, ML teams spend up to 80% of their time on curating, creating, and managing data. This includes time spent labeling, maintaining infrastructure, preparing data, training labeling teams, and other administrative tasks. This leaves very little time for ML engineers to engineer their models. Quality: A model can only become as good as the data it trains on, so producing high quality training data is an imperative for advanced ML teams. Ensuring that every asset in a large dataset is labeled accurately takes even more time and resources, from getting input from domain experts to creating review processes for training data. The iterative cycle: Machine learning, like software development, requires an iterative process to produce successful results. While software developers can iterate on an application multiple times a day, the iterative cycle for ML teams can take weeks or months. This is mostly due to the amount of training data required to get an algorithm up to the required level of accuracy. Data: Usually, ML teams simply label all the data they have available to train their model — which not only takes time and resources to label well, but also requires more complicated labeling infrastructure to support higher volumes of data. As their slow cycles progress, ML teams also typically experience diminishing performance gains, so that even larger amounts of training data are required for small improvements in performance. Above: While the number of annotations and costs increase over time as a model is trained, its performance sees diminishing returns. Teams struggling to speed up their iteration cycle and better allocate their resources between producing training data and evaluating and debugging model performance can benefit from using active learning workflows for training their models faster and more efficiently. Benefits of active learning Active learning is an ML method in which models “ask” for the information they need to perform better. This method ensures that a model is trained only on the data most likely to increase its performance. It can help ML teams make significant improvements in speed and efficiency. 
Teams that embrace this method:

- Generate less training data, saving labeling time and costs, making it easier to produce high-quality labels, and reducing the time between iterations
- Gain a better understanding of how their models perform, so that engineers can make data-driven decisions when developing their algorithms
- Curate training datasets more easily based on model performance

Better data, not more data

Active learning shifts the focus from the quantity of training data to its quality. A data-centric approach to ML has been lauded as a necessary pivot in AI by leaders in the space, including Andrew Ng of DeepLearning.ai. If the model is only as good as the data it’s trained on, the key to a highly performant model is high-quality training data. And while the quality of a labeled asset depends partly on how well it has been labeled and how it was labeled relative to the specific use case or problem the model is meant to solve, it also depends on whether the labeled asset will actually improve model performance.

Employing active learning requires that teams curate their training datasets based on where the model is least confident after its latest training cycle, a practice that, according to my experience at Labelbox and recent research from Stanford University, can lead to equivalent model performance with 10% to 50% less training data, depending on your previous data selection methods.

With less data to label for each iteration, the resources required to label training data shrink significantly, and those resources can be reallocated to ensuring that the labels created are of high quality. A smaller dataset also takes less time to label, reducing the time between iterations and enabling teams to train their models at a much faster pace. Teams will also realize more significant time savings from ensuring that each dataset boosts model performance, getting the model to production-level performance much faster than with other data selection methods.

Understanding model performance

A vital aspect of active learning is evaluating and understanding model performance after every iteration. It’s impossible to effectively curate the next training dataset without first finding areas of low confidence and edge cases. ML teams dedicated to an active learning process will need to track all performance metrics in one place to better monitor progress. They’ll also benefit from visually comparing model predictions with ground truth, particularly for computer vision and text use cases.

Above: The Model Diagnostics tool from Labelbox enables ML teams to visualize model performance and easily find errors.

Once the team has systems in place for fast and easy model error analysis, they can make informed decisions when putting together the next batch of training data and prioritize assets that exemplify the classes and edge cases the model needs to improve on. This process ensures that models reach high levels of confidence at a much faster rate than a typical procedure involving large datasets and/or datasets created through random sampling.

Challenges of active learning

While active learning provides many benefits, it requires specific infrastructure to ensure a smooth, repeatable process across multiple iterations and models. ML teams need one place to monitor model performance metrics and drill down into the data for specific information, rather than the patchwork of tools and analysis methods that are typically used. For those working on computer vision or text use cases, a way to visualize model predictions and compare them to ground truth data can be helpful in identifying errors and prioritizing assets for the next training dataset.

“When you have millions, maybe tens of millions, of unstructured pieces of data, you need a way of sampling those, finding which ones you’re going to queue for labeling,” said Matthew McAuley, senior data scientist at Allstate, during a recent webinar with Labelbox and VentureBeat.

Teams will also need a training data pipeline that gives them complete visibility and control over their assets to produce high-quality training data for their models. “You need tooling around that [annotation], and you need that tooling integrated with your unstructured data store,” said McAuley.

ML teams that use Labelbox have access to the aforementioned infrastructure, all within one training data platform. Watch this short demo to see how it works.

Gareth Jones is Head of Model Diagnostics & Catalog at Labelbox.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]."
id: 14971
year: 2022
title: "Top 5 data quality and accuracy challenges and how to overcome them | VentureBeat"
url: "https://venturebeat.com/2022/04/24/top-5-data-quality-accuracy-challenges-and-how-to-overcome-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Top 5 data quality and accuracy challenges and how to overcome them Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every company today is data-driven or at least claims to be. Business decisions are no longer made based on hunches or anecdotal trends as they were in the past. Concrete data and analytics now power businesses’ most critical decisions. As more companies leverage the power of machine learning and artificial intelligence to make critical choices, there must be a conversation around the quality—the completeness, consistency, validity, timeliness and uniqueness—of the data used by these tools. The insights companies expect to be delivered by machine learning (ML) or artificial intelligence (AI)-based technologies are only as good as the data used to power them. The adage, “garbage in, garbage out,” comes to mind when it comes to data-based decisions. Statistically, poor data quality leads to increased complexity of data ecosystems and poor decision-making over the long term. In fact, roughly $12.9 million is lost every year due to poor data quality. As data volumes continue to increase , so will the challenges that businesses face with validating and their data. To overcome issues related to data quality and accuracy, it’s critical to first know the context in which the data elements will be used, as well as best practices to guide the initiatives along. 1. Data quality is not a one-size-fits-all endeavor Data initiatives are not specific to a single business driver. In other words, determining data quality will always depend on what a business is trying to achieve with that data. The same data can impact more than one business unit, function or project in very different ways. Furthermore, the list of data elements that require strict governance may vary according to different data users. For example, marketing teams are going to need a highly accurate and validated email list while R&D would be invested in quality user feedback data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The best team to discern a data element’s quality, then, would be the one closest to the data. Only they will be able to recognize data as it supports business processes and ultimately assess accuracy based on what the data is used for and how. 2. What you don’t know can hurt you Data is an enterprise asset. However, actions speak louder than words. 
Not everyone within an enterprise is doing all they can to make sure data is accurate. If users do not recognize the importance of data quality and governance—or simply don’t prioritize them as they should—they are not going to make an effort to both anticipate data issues from mediocre data entry or raise their hand when they find a data issue that needs to be remediated. This might be addressed practically by tracking data quality metrics as a performance goal to foster more accountability for those directly involved with data. In addition, business leaders must champion the importance of their data quality program. They should align with key team members about the practical impact of poor data quality. For instance, misleading insights that are shared in inaccurate reports for stakeholders, which can potentially lead to fines or penalties. Investing in better data literacy can help organizations create a culture of data quality to avoid making careless or ill-informed mistakes that damage the bottom line. 3. Don’t try to boil the ocean It is impractical to fix a large laundry list of data quality problems. It’s not an efficient use of resources, either. The number of data elements active within any given organization is huge and is growing exponentially. It’s best to start by defining an organization’s critical data elements (CDEs), which are the data elements integral to the main function of a specific business. CDEs are unique to each business. Net Revenue is a common CDE for most businesses, as it’s important for reporting to investors and other shareholders, etc. Since every company has different business goals, operating models and organizational structures, every company’s CDEs will be different. In retail, for example, CDEs might relate to design or sales. On the other hand, healthcare companies will be more interested in ensuring the quality of regulatory compliance data. Although this is not an exhaustive list, business leaders might consider asking the following questions to help define their unique CDEs: What are your critical business processes? What data is used within those processes? Are these data elements involved in regulatory reporting? Will these reports be audited? Will these data elements guide initiatives in other departments within the organization? Validating and remediating only the most key elements will help organizations scale their data quality efforts in a sustainable and resourceful way. Eventually, an organization’s data quality program will reach a level of maturity where there are frameworks (often with some level of automation) that will categorize data assets based on predefined elements to remove disparity across the enterprise. 4. More visibility = more accountability and better data quality Businesses drive value by knowing where their CDEs are, who is accessing them and how they’re being used. In essence, there is no way for a company to identify their CDEs if they don’t have proper data governance in place at the start. However, many companies struggle with unclear or non-existent ownership into their data stores. Defining ownership before onboarding more data stores or sources promotes commitment to quality and usefulness. It’s also wise for organizations to set up a data governance program where data ownership is clearly defined and people can be held accountable. This can be as simple as a shared spreadsheet dictating ownership of the set of data elements or can be managed by a sophisticated data governance platform, for example. 
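A minimal sketch of what rule-based validation of a handful of CDEs can look like, assuming a pandas DataFrame of transactions; the file name, column names and thresholds are hypothetical placeholders, not any specific vendor's tooling.

```python
# Simple completeness / validity / uniqueness / timeliness checks on CDEs.
import pandas as pd

df = pd.read_csv("transactions.csv")   # hypothetical source

rules = {
    "net_revenue is complete": df["net_revenue"].notna().all(),
    "net_revenue is valid (non-negative)": (df["net_revenue"] >= 0).all(),
    "transaction_id is unique": df["transaction_id"].is_unique,
    "booking_date is timely": (
        pd.to_datetime(df["booking_date"]) >= pd.Timestamp("2020-01-01")
    ).all(),
}

for rule, passed in rules.items():
    print(f"{'PASS' if passed else 'FAIL'}: {rule}")
```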
Just as organizations should model their business processes to improve accountability, they must also model their data, in terms of data structure, data pipelines and how data is transformed. Data architecture attempts to model the structure of an organization’s logical and physical data assets and data management resources. Creating this type of visibility gets at the heart of the data quality issue: without visibility into the lifecycle of data — when it’s created, how it’s used and transformed, and how it’s outputted — it’s impossible to ensure true data quality.

5. Data overload is increasing

Even when data and analytics teams have established frameworks to categorize and prioritize CDEs, they are still left with thousands of data elements that need to be either validated or remediated. Each of these data elements can require one or more business rules that are specific to the context in which it will be used. However, those rules can only be assigned by the business users working with those unique datasets. Data quality teams therefore need to work closely with subject-matter experts to identify rules for every unique data element, which can be an extremely dense task even when elements are prioritized. This often leads to burnout and overload within data quality teams, because they are responsible for manually writing a large number of rules for a variety of data elements. When it comes to the workload of their data quality team members, organizations must set realistic expectations. They may consider expanding their data quality team and/or investing in tools that leverage ML to reduce the amount of manual work in data quality tasks.

Data isn’t just the new oil of the world: it’s the new water of the world. Organizations can have the most intricate infrastructure, but if the water (or data) running through those pipelines isn’t drinkable, it’s useless. People who need this water must have easy access to it, they must know that it’s usable and not tainted, they must know when supply is low and, lastly, the suppliers and gatekeepers must know who is accessing it. Just as access to clean drinking water helps communities in a variety of ways, improved access to data, mature data quality frameworks and a deeper data quality culture can protect data-reliant programs and insights, helping spur innovation and efficiency within organizations around the world.

JP Romero is the technical manager at Kalypso."
id: 14972
year: 2022
title: "10 startups riding the wave of AI innovation | VentureBeat"
url: "https://venturebeat.com/2022/04/11/10-startups-riding-the-wave-of-ai-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 startups riding the wave of AI innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Organizations are increasingly adopting AI-enabled technologies to address existing and emerging problems within the enterprise ecosystem, meet changing market demands and deliver business outcomes at scale. Shubhangi Vashisth, senior principal research analyst at Gartner , said that AI innovation is happening at a rapid pace. Vashisth further noted that innovations including edge AI, computer vision, decision intelligence and machine learning will have a transformational impact on the market in coming years. However, while AI-powered technologies are helping to build more agile and effective enterprise systems, they usher in new challenges. For example, Gartner notes that AI-based approaches if left unchecked can perpetuate bias, leading to issues, loss of productivity and revenue. AI is fueled by data and if there are errors along the data pipeline , AI models will produce biased results. Only 53% of AI projects make it from prototype to production, according to Gartner research. But it’s not all doom and gloom for the ecosystem. A new survey by McKinsey revealed AI high performers following the best practices are deriving the most benefits from AI and professionalizing or industrializing their capabilities. As more startups ride the next wave of AI to innovate for the enterprise, some startups look poised to lead the pack in 2022 and beyond. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tracking the pack A report published last month by Statista showed the number of AI-focused startups worldwide was 3,465 in 2018, with 1,393 in the U.S. alone. Another State of AI report from CBS Insights last year said AI startup funding hit a record high of $17.9 billion in Q3. Many players in the ecosystem are jostling to lead the pack with enough investment dollars. But which startups in the ever-evolving AI startup space might require a closer look for enterprises? Here are 10 AI startups that are demonstrating upward growth trajectories in a fast-paced market and whose CEOs have articulated to VentureBeat over the past few months a broader context to their key differentiators, strategies and traction. Below are vital details on these 10 AI startups that are worth watching across diverse industries, including retail, finance, cybersecurity, devops and more. 
Each company is ranked by its total funding to date, with quotes and metrics supplied during interviews with VentureBeat. DataStax Founded: 2010 Founder(s): Jonathan Ellis, Matt Pfeil Headquarters: California, U.S. Total funding to date: $227.6 million Real-time data company DataStax says it helps enterprises unleash the value of real-time data to quickly build the smart, high-growth applications required to become data-driven businesses. Some of the leading digital services used daily for streaming, gaming, social networks, ecommerce and many others are built on DataStax. Companies like Verizon, Audi, ESL Gaming and many others are using DataStax solutions — including the DataStax NoSQL cloud database, Astra DB, and its unified event streaming technology, Astra Streaming — to build real-time, high-scale applications that power their businesses. According to DataStax chairman and CEO Chet Kapoor, DataStax provides an open stack for all real-time data, built on the world's most scalable database (Apache Cassandra) and the most advanced streaming technology (Apache Pulsar), in an open, cloud-native architecture. The company's open stack helps developers easily build real-time applications that run their businesses. Kapoor said these developers continue to tap the power of advanced event streaming technology based on Apache Pulsar to act instantly on data, drive dynamic customer experiences and take advantage of ML and AI — all on a single data stack that works. He said DataStax uses modern APIs that allow developers to skip the complexity of multiple OSS projects and APIs that don't scale. DataStax claims its modern data APIs "power commerce, mobile, AI/ML, IoT, microservices, social, gaming and interactive applications that must scale up and scale down based on demand." Kapoor noted that DataStax has an edge over other players in the industry because it's the only open stack that unifies data in motion and data at rest for real-time use, available on any cloud and with pay-as-you-grow pricing. Visier Founded: 2010 Founder(s): John Schwarz, Ryan Wong Headquarters: Vancouver, Canada Total funding to date: $216.5 million Canadian SaaS company Visier Inc. (also Visier) is an HR analytics platform offering cloud-based solutions for workforce analytics and workforce planning. To achieve better team and business management outcomes, leaders need to start by asking the right questions about their workforce. Ryan Wong, cofounder and CEO at Visier, told VentureBeat that Visier provides solutions that relay fast, accurate people data so businesses can enhance productivity and performance, increase employee satisfaction and retention, ensure profitable career planning and ethically upgrade future decision making. Wong said Visier develops its solution with a combination of Scala, Angular, open-source algorithms and proprietary technologies. He said Visier uses AI to enrich an organization's data with standardized information, enabling organizations to better compare and understand trends over time. He also said Visier provides proven ML predictions that have been verified across hundreds of enterprises. "The prediction models learn patterns from the employee data of organizations and synthesize them into easy-to-understand and actionable information. 
Visier also uses AI to support analysts in the organization by analyzing the data of an organization as it is created, highlighting and alerting users to new patterns, outliers and potential issues.” While Visier has competition in niche people analytics vendors like One Model and Crunchr, Wong said the company is designed to help organizations accelerate their people analytics strategy in three key areas where other systems and analytics processes fail or fall short. These areas include data management, deployment and user experience. Visier’s list of competitors also includes HCM suite analytics vendors like Workday and Oracle, as well as DIY people analytics using generic BI tools like Tableau and PowerBI. The company continues to focus on answering the important questions that business owners need to grasp how to shape a better business model overall. Having raised $125 million in a series E funding round last year, Visier is on the path to expanding its global influence. Customers include Electronic Arts, Uber, Adobe and more. Visier is expanding its presence in 75 countries with much room to grow. Vic.ai Founded: 2016 Founder(s): Alexander Hagerup, Kristoffer Roil, Rune Løyning Headquarters: New York, U.S. Total funding to date: $62.7 million The founders of Vic.ai set out to reimagine accounting using autonomy and AI. Kristoffer Roil, cofounder and COO at Vic.ai, said Vic.ai is ushering in a new era of intelligent accounting by eliminating manual data entry and completely automating invoice processing — the most manual and inefficient task in accounting. According to Vic.ai cofounder and CEO, Alexander Hagerup, Vic.ai uses proprietary AI technology with algorithms that, having been trained on more than half a billion pieces of data, can handle invoices of all types and formats. The AI operates at up to 99% accuracy, and customers see up to 80% process improvement. Vic.ai also provides customers with business intelligence. By deriving valuable information from financial transactions in real time, leaders can gain a financial edge by making better decisions faster. Unlike RPA solutions, Roil said Vic.ai’s platform doesn’t require rules, templates or configuration to work as it’s been trained on over half a billion invoices and continues to learn from data every day. Reading an invoice is easy, he said, but classifying it correctly requires intelligence — either by a human or more efficiently by an AI solution like Vic.ai. “By pretraining Vic.ai with historical data, you start with incredibly high accuracy rates. Over time, the system learns, adapts and improves to the point where a large percentage of invoices can be processed autonomously. It isn’t only able to read the invoice, but it’s also able to classify a number on an invoice and the correct type of cost,” said Hagerup. While Vic.ai’s biggest competitors include AppZen, ABBYY , Smartli and Mineraltree, the company will continue to pioneer the use of autonomy and intelligence to improve productivity, decision-making and ROI within accounting and finance processes. BUDDI.AI Founded: 2013 Founder(s): Ram Swaminathan, Sudarsun Santhiappan, Venkatesh Prabhu Headquarters: New York, U.S. Total funding to date: Undisclosed The healthcare industry is seeing an astronomical increase in the use of AI, with a report by Gartner saying healthcare organizations’ strategic understanding of AI has matured rapidly. 
New York-based deep learning platform company BUDDI.AI is on a quest to bring digital transformation to the healthcare industry with AI. BUDDI.AI provides clinical and revenue cycle automation solutions for healthcare. The company claims its AI-enabled solutions help to turn unstructured data in healthcare organizations into actionable insights for those along the continuum of care. BUDDI.AI cofounder and CEO Ram Swaminathan told VentureBeat that BUDDI.AI's platform extracts clinical context and automates functions that improve patient care, enhance clinical documentation, streamline medical coding accuracy and improve reimbursements — all of which are integral to a healthy revenue cycle. Swaminathan said that over the past six-plus years, BUDDI.AI has built an ensemble of proprietary algorithms covering natural language processing, clinical contextual graphs, natural language generation, negation detection, optical character recognition, tabular column extraction and several more. The company has 50+ AI as a Service (AIaaS) offerings specifically designed for automating healthcare functions, while offering some of the industry's best efficacy for production use, according to Swaminathan. BUDDI.AI's competitors include traditional manual medical coding and medical billing shops, and it considers practically all other semi-automation companies, like Optum, 3M, EPIC, Cerner, Eclinicalworks or Athena Health, as collaborators. However, Swaminathan said BUDDI.AI is differentiated from all of them because it autonomously performs medical coding and medical billing across all outpatient medical specialties. He said BUDDI.AI does this by using deep learning algorithms combined with sophisticated systems built by experts — offering contractual guarantees of over 95% accuracy on codes and claims for more than 70% of the monthly volumes. Hyperproof Founded: 2018 Founder(s): Craig Unger Headquarters: Bellevue, Washington Total funding to date: $22.3 million Hyperproof is a compliance operations SaaS platform that aims to make it easier for companies to follow security and compliance protocols. CEO and founder Craig Unger began Hyperproof to ensure businesses could complete their compliance work without the redundant, time-consuming and faulty manual processes that often exist. According to Unger, Hyperproof plans to leverage ML in several different ways — including eliminating repetitive compliance tasks and providing meaningful risk insights to users so that they can make better, more strategic decisions. "Hyperproof will use ML to help our users automatically identify/flag the overlapping requirements across various compliance frameworks — so they can see areas where they're already meeting requirements and reuse their compliance artifacts to satisfy new requirements." Later this year, Hyperproof will unveil ML-enabled solutions that automatically identify opportunities for users to set up integrations that will pull in compliance data and also help users to gauge how prepared they are for an upcoming audit. Coalfire's 2020 survey found that 51% of cybersecurity professionals are spending 40% or more of their budgets on compliance. With $16.5 million in series A funding raised in Q4 2021, Hyperproof is helping businesses scale and gain visibility by staying compliant. Unger said Hyperproof is the only platform laser-focused on compliance operations to support the people in the trenches who are overwhelmed with compliance/assurance demands from their organization's customers and regulatory bodies. 
Data privacy is a major concern in business today, according to Unger. He said the capability to efficiently track, implement and enforce ongoing compliance measures enables organizations to meet higher goals while securely protecting their employees, customers and shareholders. All of this contributes to risk management, audit preparedness and seamless operations. Hyperproof has built dozens of integrations with cloud services that house compliance data including AWS, Azure, GitHub, Okta, Jamf, Jira, ZenDesk and others — enabling automated evidence gathering and seamless collaboration between organizational stakeholders. Strivacity Founded: 2019 Founder(s): Keith Graham and Stephen Cox Headquarters: Virginia, U.S. Total funding to date: $11.3 million Keith Graham and Stephen Cox claim they are reinventing the customer identity and access management (CIAM) space by putting the “C” back in CIAM. Legacy vendors in this space built their solutions primarily for B2E use — prioritizing security and compliance above all else, usually leaving customer experience as an afterthought. Strivacity provides a low-code solution that adds secure customer identity and access management (CIAM) capabilities to a brand’s online properties fast so they can scale to customer demand, grow revenue, stay compliant with fast-changing privacy regulations and personalize their service. Strivacity ingests data derived from ML-based behavioral models as a risk indicator at any point in the consumer lifecycle — helping companies make critical decisions like whether to allow a particular lifecycle event to proceed, or shut down an event entirely when it seems too risky and more. Companies that believe that customer experience, security and compliance are equally vital to the success of their business benefit from Strivacity’s approach to CIAM. For example, a technology company that works with Strivacity noted that Strivacity provides a comprehensive approach to CIAM, and they’re intentional about making sure they’re meeting all the right stakeholders’ needs, from customers to security teams to marketing. “We hear from our customers that, on average, using Strivacity versus another provider reduces development and operational costs by 50% with our workflows and APIs that you can drop right into your apps,” said Graham. Lucinity Founded: 2018 Founder(s): Gudmundur Kristjansson Headquarters: Reykjavík, Iceland Total funding to date: $8.1 million Lucinity CEO and founder, Gudmundur Kristjansson, told VentureBeat Lucinity is on a quest to change the world with its anti-money laundering (AML) technology which empowers banks, fintechs and others in the financial services ecosystem to make data-driven decisions. Today, hardly 1% of money laundering instances are detected or recovered, despite growing regulations and strains on compliance professionals, according to Kristjansson. He said Lucinity’s API-first approach enables it to deploy cutting-edge tech throughout the company’s stack — such as Spark, Kubernetes and React — which has shown to be a successful scale strategy. Lucinity’s unique experience in banking, compliance, regulation and data science has helped them develop a new approach to tackling money laundering — harnessing the best of human intelligence and augmenting it with advanced AI. Their proprietary SaaS platform helps banks quickly identify suspicious behaviors and risk exposures. 
Lucinity’s behavioral detection empowers compliance teams to not only observe customers’ activity, but to understand them holistically and in-depth, ensuring a leading position in compliance. Other companies try to solve money laundering with AI for AI’s sake, said Kristjansson, but Lucinity focuses on the intersection of humans and machines instead. “At Lucinity, we use Human AI to explain AI findings so that every compliance professional can take on financial crime with the help of technology. We evolve our models with every new client and our programs get better every day. We work with clients to future-proof their business,” he said. With a focus on simple-to-use systems that work with analysts, not against them, Lucinity helps banks and fintechs to get that time and money back with a beautiful, efficient and effective interface designed around the specific needs of modern compliance. Verikai Founded: 2018 Founder(s): Brett Coffin Headquarters: San Francisco, California, U.S. Total funding to date: $6 million Verikai stands as a predictive risk assessment software for the insurance industry. Using ML to help insurance companies and underwriters assess risk, the company says it’s currently the only predictive data tool in the “insurtech market.” With a database of over 1.3 trillion data markers, 5,000 behavior patterns and an abundance of factors that account for over 250 million people, Verikai gives insurance companies insights into individual and group risk like never before. CEO Jeff Chen said Verikai is a predictive data and risk tool for insurance underwriters and brokers. He said alternative data and ML are the core base of Verikai’s products, and they will always have a huge impact on the tools the company provides. Calculating clinical outcomes and behavioral attributes using big data can now give insurance providers accurate, cost-effective forecasts. Real-time census risk reports from Verikai help professionals reduce losses, strategize and improve the complete underwriting process. The company is also providing its business customers with access to suitable insurance products to help HR and employees receive the insurance they need. “As our ML models continue to mature and as we discover new data sources, the ability to provide our customers with the best product models is always our number one priority,” said Chen. HIVERY Founded: 2015 Founder(s): Jason Hosking, Franki Chamaki, Matthew Robards and Menkes van den Brielki Headquarters: Sydney, Australia Total funding to date: $17.7 million HIVERY hopes to fundamentally change the way consumer packaged goods (CPG) companies and retailers collaborate with regard to assortment and space decisions. HIVERY Curate uses proprietary ML and applied mathematics algorithms that have been developed and acquired from Australia’s national science agency — CSIRO’s Data61. With HIVERY Curate, a process that takes six months is reduced to around six minutes, all with the power of AI/ML and applied mathematics techniques. Jason Hosking, cofounder and CEO at HIVERY, said HIVERY’s customers are able to make rapid assortment scenario strategies simulations around SKU rationalization, SKU introduction and space while considering any category goal, merchandising rules and demand transference with HIVERY Curate. Once a strategy is determined, said Hosking, HIVERY Curate can generate accompanying planograms for execution. HIVERY’s proprietary ML models use recommender systems. 
These ML models can learn from clients' datasets to make recommendations on assortment at store level or at any cluster store count required. HIVERY combines ML with applied mathematics methods, often called "operations research" or "OR." While HIVERY's ML models recommend products, its OR algorithms factor in real-world rules or constraints to ensure that any recommendations are practical, operational and product-space aware at store level. Hosking said retailers and CPGs currently require multiple solution providers to determine assortment or category strategy, optimize assortment and space, and generate store-level planograms. HIVERY, however, can run assortment strategy simulations and factor any category goals and merchandising constraints into its recommendations — all in one solution — something Hosking said no other company currently does. The company earned a spot on Forbes Asia's 100 to Watch list last year and was more recently named by CB Insights in its 2022 Retail Tech 100 report — an annual ranking of the 100 most promising B2B retail tech companies in the world. Prospero.Ai Founded: 2019 Founder(s): George Kailas, Adam Plante and Niles Plante Headquarters: New York, U.S. Total funding to date: Undisclosed Prospero.Ai says it's committed to leveling the playing field in investing, with AI and ML as the pillars of its solution. Prospero's cofounders, George Kailas, Adam Plante and Niles Plante, created a platform that aims to make finance more fair and prosperous for all. Previously from the hedge fund world, CEO George Kailas is passionate about providing institutional-quality investment research for free and without conflicts of interest. Other fintech companies don't offer their users the most valuable commodity — the predictions derived from their data — but Prospero is doing things differently, said Kailas. Prospero's joint IP with NYU, a proprietary AI system, simplifies stock analysis into 10 key signals and educates users on how to leverage its predictions to invest better. "Prospero is the first platform that's completely free while protecting users' privacy completely. Currently in beta, it aims to reverse the deterioration of the middle class by providing financial tools and literacy for all," he said. "
14,973
2,021
"What to expect from OpenAI’s Codex API | VentureBeat"
"https://venturebeat.com/2021/08/16/what-to-expect-from-openais-codex-api"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What to expect from OpenAI’s Codex API Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article is part of our series that explores the business of artificial intelligence OpenAI will make Codex , its AI programmer technology, available through an application programming interface, the company announced on its blog on Tuesday. In tandem with the announcement, OpenAI CTO Greg Brockman, Chief Scientist Ilya Sutskever, and co-founder Wojciech Zaremba gave an online presentation of the capabilities of the deep learning model. The Codex demo puts the advantages of large language models to full display, showing an impressive capacity to resolve references and write code for a variety of APIs and micro-tasks that can be frustratingly time-consuming. OpenAI is still testing the waters with Codex. How far you can push it in programming tasks and how it will affect the software job market remain open questions. But this unexpected turn to OpenAI’s exploration of large language models seems to be the first promising application of neural networks that were meant for conversations with humans. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Language models for coding Codex is a descendent of GPT-3 , a very large language model OpenAI released in 2020 and made available through a commercial private beta API. OpenAI’s researchers wanted to see how developers would use GPT-3 for natural language processing applications. But the outcome surprised them. “The thing that was funny for us was to see that the applications that most captured people’s imaginations, the ones that most inspired people, were the programming applications,” Brockman said in the video demo of Codex. “Because we didn’t make the model to be good at coding at all. And we knew that if we put in some effort, we could make something happen.” Codex is a version of GPT-3 that has been finetuned for programming tasks. The machine learning model is already used in Copilot , another beta-test code generation product hosted by GitHub. According to OpenAI, the current version of Codex has a 37-percent accuracy on coding tasks as opposed to GPT-3’s zero percent. Codex takes a natural language prompt as input (e.g., “Say hello world”) and generates code for the task it is given. It is supposed to make it much easier for programmers to take care of the mundane parts of writing software. 
"You just ask the computer to do something, and it just does it," Brockman said. In the demo, Brockman and Sutskever take Codex through a series of tasks that range from displaying a simple "Hello World" message in Python to gradually writing a web game in JavaScript. The demo had some impressive highlights, even if it seemed to be rehearsed. For example, Codex seems to be pretty good at coreference resolution, linking nouns in the prompt to the proper variables and functions in the code (though in the demo, it seemed that Brockman also knew how to phrase his commands to avoid confusing the deep learning model). Codex can perform some tedious tasks, such as rendering web pages, launching web servers, and sending emails. The model also shows some of the zero-shot learning capabilities of GPT-3. For instance, in the demo, Brockman showed how you can add Mailchimp interfacing capabilities to Codex with three lines of instructions. Further down the video, the presenters use Codex to create a user interface in JavaScript, place objects on the screen, and make the objects controllable with the keyboard arrow keys. Another video shows OpenAI generating data science code and charts in Python's matplotlib library. These are not complicated tasks, but they're tedious and error-prone processes, and they usually require looking up reference manuals, browsing programming forums, and poring over code samples. So, having an AI assistant write this kind of code for you can save some valuable time. "This kind of stuff is not the fun part of programming," Brockman said. Maybe I can finally use matplotlib now without spending half a day googling the exact syntax and options! https://t.co/Vak1nzu0Jk — Soumith Chintala (@soumithchintala) August 11, 2021 Per OpenAI's blog: "Once a programmer knows what to build, the act of writing code can be thought of as (1) breaking a problem down into simpler problems, and (2) mapping those simple problems to existing code (libraries, APIs, or functions) that already exist. The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it's where OpenAI Codex excels most." The limits of Codex While the Codex demos are impressive, they do not present a full picture of the deep learning system's capabilities and limits. Codex is currently available through a closed beta program, which I don't have access to yet (hopefully that will change). OpenAI also ran a Codex coding challenge on Thursday, which was available to everyone. Unfortunately, their servers were overloaded when I tuned in, so I wasn't able to play around with it. The Codex Challenge servers are currently overloaded due to demand (Codex itself is fine though!). Team is fixing… please stand by. — OpenAI (@OpenAI) August 12, 2021 But the demo video shows some of the flaws to look out for when using Codex. For example, if you tell human programmers to print "Hello world" five times, they will usually use a loop and print each message on its own line. But when Brockman told the deep learning model to do the same thing, it used an unusual method that pasted all the messages next to each other. As a result, Brockman was forced to reword his instruction to be more specific. Codex's output is not necessarily the optimal way to solve problems. For example, to enlarge an image on the webpage, the model used an awkward CSS instruction instead of just using larger numbers for width and height. 
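To picture the "Hello world" example above, the loop most human programmers would reach for looks like the first snippet below, while the behavior described in the demo is closer to the second; both snippets are reconstructions for illustration, not the demo's actual output.

# What a human programmer would typically write: a loop,
# printing each message on its own line.
for _ in range(5):
    print("Hello world")

# Roughly the behavior described in the demo, before the prompt was reworded:
# the messages end up pasted next to each other on one line.
print("Hello world" * 5)

The difference is trivial to a person reading the code, which is why rewording the prompt was enough to steer the model toward the expected version.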
And sometimes, the model generates code that is very far off from what the developer intends. In the final ten minutes of the demo, Brockman and Sutskever used Codex to create a JavaScript game. When they instructed Codex to define a condition for game loss, the deep learning model generated an event listener for the spacebar keypress. Brockman fixed it by explicitly telling Codex to write a function for game loss. The video demo also didn't show any of the limits detailed in full in the Codex paper, including the model's limits in dealing with multi-step tasks. This omission raised some concern in the AI community. .@OpenAI 's #Codex is to programming as Tesla's FSD 2021 is to driving. Read the paper (esp Appendix B) carefully and you will realize there is a gap between the slick videos & reality: it is often correct on simple tasks, but frequently lost on more complex challenges. 1/3 pic.twitter.com/9VNRIj1wYw — Gary Marcus (@GaryMarcus) August 11, 2021 But despite the limits, Codex can be very useful. Already, those lucky few who have been given access to the API have used it to automate some of the tedious and boring parts of their jobs. And many others who have been working with GitHub's Copilot have also expressed satisfaction with the productivity benefits of AI-powered code generation. The new @OpenAI Codex model is a pretty exciting piece of technology. Here I made a @Blender add-on and taught it how to use the built in Python API. Taking creative coding to the next level!! pic.twitter.com/0UksTsq1Ep — Andrew Carr (@andrew_n_carr) August 11, 2021 Who should use Codex? In an interview with The Verge, Zaremba compared programming with Codex to the transition from punch cards to programming languages. At the time, the advent of programming languages such as C and Fortran reduced the barrier to entry for software development and made the market accessible to a much larger audience. The same thing happened as higher-level languages appeared and took care of the complex technical challenges of writing code. Today, many programmers write code without worrying about allocating and freeing memory chunks, managing threads, or releasing system resources and handles. But I don't think Codex is a transition from learning programming languages to giving computers conversational instructions and letting them write the code for themselves. Codex can be a very useful tool for experienced programmers who want an AI assistant to churn out code that they can review. But in the hands of a novice programmer, Codex can be a dangerous tool with unpredictable results. I'm especially concerned about the potential security flaws that such statistical models can have. Since the model creates its output based on the statistical regularities of its training corpus, it can be vulnerable to data poisoning attacks. For example, if an adversary uploads malicious code to GitHub in enough abundance, targeted at a specific type of prompt, Codex might pick up those patterns during training and then output them in response to user instructions. In fact, the page for GitHub Copilot, which uses the same technology, warns that the code generation model might suggest "old or deprecated uses of libraries and languages." This means that blindly accepting Codex's output can be a recipe for disaster, even if it appears to work fine. You should only use it to generate code that you fully understand. The business model of Codex I believe the Codex API will find plenty of internal uses for software companies. 
According to the details in the Codex paper, it is much more resource-efficient than GPT-3, and therefore, it should be more affordable. If software development companies manage to adapt the tool to their internal processes (as with the Blender example above) and save a few hours' time for their developers every month, it will be worth the price. But the real developments around Codex will come from Microsoft, the unofficial owner of OpenAI and the exclusive license-holder of its technology. After OpenAI commercialized GPT-3, I argued that creating a product and business models on the language model would be very difficult if not impossible. Whatever you do with the language model, Microsoft will be able to do it better, faster, and at a lower cost. And with the huge userbase of Office, Teams, and other productivity tools, Microsoft is in a suitable position to dominate most markets for GPT-3-powered products. Microsoft also has a dominating position with Codex, especially since it owns GitHub and Azure, two powerhouses for software development, DevOps, and application hosting. So if you're planning to create a commercial product with the Codex API, you'll probably lose the competition to Microsoft unless you're targeting a very narrow market that the software giant will not be interested in. As with GPT-3, OpenAI and Microsoft released the Codex API to explore new product development opportunities as developers experiment with it, and they will use the feedback to roll out profitable products. "[We] know we've only scratched the surface of what can be done," the OpenAI blog reads. Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. "
14,974
2,021
"API security 'arms race' heats up | VentureBeat"
"https://venturebeat.com/2021/11/19/api-security-arms-race-heats-up"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages API security ‘arms race’ heats up Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprises are starting to catch on to the massive security risk that the pervasive use of application programming interfaces (APIs) can create, but many still need to get up to speed. Poorly secured APIs have been recognized as an issue for years. Data breaches of T-Mobile and Facebook discovered in 2018, for instance, both stemmed from API flaws. But API security has now come even more to the forefront with enterprises across all industries in the process of turning into digital businesses — a shift that necessitates lots and lots of APIs. The software serves as an intermediary between different applications, allowing apps and websites to access more data and gain greater functionality. The implication of APIs in high-profile hacks such as the SolarWinds attack is also spurring more companies to pay attention to the issue of API security — though many still have yet to take action, says Gartner’s Peter Firstbrook. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “In most organizations, when I ask them who’s responsible for API security, there are blank stares around the table,” he said at the Gartner Security & Risk Management Summit — America’s virtual conference this week. That needs to change, said Firstbrook, a vice president and analyst at the research firm. API security vendor Salt Security reported that its customer base saw a 348% increase in API-based attacks over the course of the first six months of 2021. “APIs are an increasing attack point,” Firstbrook said. “The internet runs on APIs. There’s a huge need for API security.” By 2022, the vast majority of web-enabled apps—90%—will have more surface area exposed for attack in the form of APIs than via the human user interface, according to Gartner research. “This is a call to action [because] most of our security testing focuses on dynamic application security testing of the user interface,” said Neil MacDonald, a vice president and analyst at Gartner, during another session at the research firm’s conference this week. “We’re saying, the bulk of the application is below the waterline—it’s APIs,” MacDonald said. “It’s program-to-program, system-to-system, application-to-backend—API calls. Those are now the new surface area for attack. 
They need to be part of your overall security strategy.” Momentum in the market Increasingly, businesses are starting to get the message. There are signs that more customers are investing to secure their APIs, while the number of products in the space also continues to expand. Salt Security, which was founded in 2016 and has offices in Silicon Valley and Israel, has revealed the names of numerous customers including The Home Depot, data center operator Equinix, and telecom firm Telefónica. To fuel its growth, the company has announced raising $100 million over the past year, including a $70 million series C round in May. A newer entrant in the space, Noname Security , reports rapid traction for its API security product since launching it in February. The startup already counts among its customers two of the world’s five largest pharmaceutical firms, one of the world’s three largest retailers, and one of the world’s three largest telecoms, said Karl Mattson, chief information security officer at Noname Security. The Palo Alto, California-based company has raised $85 million since its founding in 2020, including a $60 million series B round in June. Other firms with notable API security offerings include Akamai , Ping Identity, 42Crunch, Traceable, Signal Sciences (owned by Fastly), and Imperva—which this year bolstered its API security platform with the acquisition of a startup in the market, CloudVector. Additional startups in the space include Neosec, which came out of stealth in September and announced a $20.7 million series A round, while established vendors that have introduced API protection features include Barracuda and Cloudflare. But as evidenced by the Salt Security report on increased API-based attacks, it’s not just the defenders that are ramping up around the API security issue. “It’s an arms race right now,” said Noname’s Mattson. “I think attackers are seeing that APIs are not overly complicated to attack and to compromise. And similarly, the defenders are rapidly coming to the realization, too.” API exploits The most frequent API-based attacks involve exploitation of an API’s authentication and authorization policies, he said. In these attacks, the hacker breaks the authentication and the authorization intent of the API in order to access data. “Now you have an unintended actor accessing a resource, such as sensitive customer data, with the organization believing that nothing was awry,” Mattson said. This so-called “leaky API” issue has been behind many of the highest-profile breaches related to APIs, he said. Another issue is that API calls are now being used to start or stop a critical business process — for instance, a broadcasting company that initiates a broadcast stream or a power company that turns a home’s electricity on or off using an API call, Mattson said. That level of dependence on APIs raises the security stakes even further, he said. Firstbrook said that the API security aspects of the SolarWinds attack also show how pivotal the issue can be. Through the malicious code implanted in the SolarWinds Orion network monitoring software, the attackers gained access to an environment belonging to email security vendor Mimecast, he noted. And Mimecast — because it provides capabilities such as anti-spam and anti-phishing for Microsoft Office 365 users — had access to the Office 365 API. Thus, through the Microsoft API key, the attackers gained access to the Exchange environments of a reported 4,000 customers, Firstbrook said. 
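As a simplified illustration of the "leaky API" pattern Mattson describes above — authentication succeeds, but the authorization intent of the endpoint is never enforced — the following hypothetical Flask handlers contrast the two behaviors; the routes, tokens and data here are invented for illustration and are not drawn from any specific product or breach.

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy stand-ins for a real backend: customer records and per-token access scopes.
CUSTOMERS = {1: {"name": "Acme Corp"}, 2: {"name": "Globex Ltd"}}
TOKEN_SCOPES = {"token-alpha": {1}, "token-beta": {2}}

# Leaky pattern: the token is authenticated, but any valid token
# can enumerate every customer record simply by guessing IDs.
@app.route("/api/customers/<int:customer_id>")
def get_customer(customer_id):
    token = request.headers.get("X-API-Token")
    if token not in TOKEN_SCOPES:
        abort(401)  # authentication only
    return jsonify(CUSTOMERS.get(customer_id, {}))

# Safer pattern: the handler also enforces object-level authorization,
# so a caller can only read the records its token is scoped to.
@app.route("/api/v2/customers/<int:customer_id>")
def get_customer_checked(customer_id):
    token = request.headers.get("X-API-Token")
    if token not in TOKEN_SCOPES:
        abort(401)
    if customer_id not in TOKEN_SCOPES[token]:
        abort(403)  # the authorization intent is enforced per object
    return jsonify(CUSTOMERS.get(customer_id, {}))

The point of the contrast is the single missing check; finding that kind of gap at scale, across thousands of endpoints, is what the API security tooling described in this article tries to automate.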
Mimecast, which published its report on the incident in March, declined to provide further comment to VentureBeat. Ultimately, it's clear that there is a need for a much greater focus on API security across industries, Firstbrook said. "Part of the supply chain is built on APIs," he said. "We really have to build a best practice around managing and understanding APIs, and securing APIs." "
14,975
2,022
"Pentesting firm NetSPI expands into attack surface management | VentureBeat"
"https://venturebeat.com/2022/02/22/pentesting-firm-netspi-expands-into-attack-surface-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Pentesting firm NetSPI expands into attack surface management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Exposure of internet-facing enterprise assets and systems can bring major risks for security. And yet, often, enterprises aren’t even aware of all the internet-facing assets they have — which of course makes it impossible to go about securing those assets and systems. As digital transformation continues turning all enterprises into internet companies, to one degree or another, this problem of exposed assets and systems is growing fast. And that has led to the emergence of a new category of security technology: External attack surface management, or EASM. The technology — sometimes referred to simply as attack surface management, or ASM — focuses on identifying all of an enterprise’s internet-facing assets, assessing for vulnerabilities and then remediating or mitigating any vulnerabilities that are uncovered. A separate discipline within security is penetration testing, or pentesting, in which a professional with hacking expertise performs a simulated attack and tries to breach a system, as a way to uncover vulnerabilities that need to be addressed. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Today, enterprise pentesting firm NetSPI announced that it’s bringing the two worlds together, with the debut of its new attack surface management offering. The solution integrates the company’s pentesting experts into the attack surface management process, as a way to improve the triage and remediation of risky exposures, said Travis Hoyt, CTO at NetSPI. “EASM does not typically include manual pentesting — at least not in the way NetSPI incorporates it into our new offering,” Hoyt in an email to VentureBeat. However, “both are necessary to truly accomplish a holistic, proactive security program,” he said. “In today’s threat environment, conducting a pentest once a year is no longer effective given the rate at which the attack surface is changing. EASM ensures that corporate networks have constant coverage and attack surface visibility.” ‘Comprehensive understanding’ When implemented together, EASM and external network pentesting “give organizations a comprehensive understanding of the weaknesses on their external attack surface – and a better path forward for efficient remediation,” Hoyt said. 
Currently, many of the key players in the EASM market focus on the monitoring and inventory of assets, and are heavily reliant on technology to accomplish this, he said. NetSPI's offering, on the other hand, will stand out by integrating human pentesting experts into attack surface management — allowing the company to manually pentest exposures to determine the risk each poses to an organization, Hoyt said. In other words, "the NetSPI team is constantly looking at your attack surface to prioritize the exposures that matter most to your business, by using proven methodologies from our two decades dedicated to pentesting," he said. A key challenge that security leaders face today is keeping up with the rate of change, Hoyt said. "New things pop up on the external network all the time, often without IT awareness," he said. "Security leaders today are tasked with keeping track of all assets and understanding the risk of every exposure, which is no easy task." Attack surface management, however, can help organizations get a comprehensive view of all of their assets and exposures — including unknown assets — allowing them to dramatically increase their visibility, Hoyt said. Best of both worlds Many other companies in attack surface management either provide scanning that is an entirely manual process, or offer pure technology platforms that operate without intervention from humans, he said. NetSPI's solution aims to take the best attributes of those two delivery models, Hoyt said. The company's attack surface management platform features automated scanning and orchestration technology that identifies and maps all assets on a company's external attack surface. The platform also continuously monitors the attack surface and provides an alert when it detects a high-risk exposure. NetSPI's operations team then steps in to triage exposures — by validating the issue, evaluating what sort of risk it poses and advising the customer about remediation. "There's no replacement for human intuition. A tool simply cannot chain together vulnerabilities the way a human can, nor understand an exposure's true risk to business operations," Hoyt said. Ultimately, with the introduction of its ASM offering, NetSPI now offers customers a "full suite of offensive security solutions," he said — for the first time providing customers with "truly continuous testing." Growth spurt Founded in 2001, NetSPI has seen its business — and headcount — take off over the past few years. The company's organic revenue grew by 51% in 2021, following 35% organic revenue growth in 2020. And NetSPI now reports having 577 customers, up from 321 customers at this time a year ago. While the company lists the names of several customers on its website, that list does not include NetSPI's marquee customers. Among some of the company's customers are "the top cloud providers, three of the five FAANG companies, nine of the 10 top U.S. banks and many of the Fortune 500," Hoyt said — with FAANG referring to the elite grouping that consists of Meta (Facebook), Apple, Amazon, Netflix and Alphabet (Google). Meanwhile, the Minneapolis-based company has expanded its staff to more than 300 — over half of whom are full-time pentesters — up from 200 employees a year ago. NetSPI expects to increase its headcount by another 20% to 30% by the end of this year. Hoyt himself joined the company in August, after holding roles at TIAA for two years and at Bank of America for nearly two decades. 
NetSPI has raised $100 million in funding to date, with $90 million of that amount raised in May 2021. The growth funding round was led by KKR with backing from Ten Eleven Ventures, as well. All in all, looking ahead, "we can only expect breaches to become more frequent as the attack surface continues to expand, in tandem with growing sophistication of hacking techniques," Hoyt said. "NetSPI's unique offering helps support internal security teams by providing an extra set of eyes, both physical and digital, and acting as a true extension of our customers' teams." "
14,976
2,022
"New Relic releases new vulnerability management solution | VentureBeat"
"https://venturebeat.com/2022/05/18/new-relic-vulnerability-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New Relic releases new vulnerability management solution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, full-stack observability provider New Relic announced the launch of a vulnerability management solution designed to enable devops teams, site reliability engineers (SREs) and infosec teams to make sense of security vulnerabilities at scale. Observability tools like New Relic have the potential to gather data applications throughout the environment, so enterprise security teams can efficiently identify and mitigate risks throughout the software development life cycle. Dealing with vulnerability sprawl The announcement comes as the number of vulnerabilities in enterprises have multiplied dramatically, with 19,733 software vulnerabilities reported in 2021 alone. With a high volume of vulnerabilities to manage, security teams are struggling to keep up, lacking both visibility into their infrastructure and a solution to prioritize remediation of high-risk vulnerabilities first. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Securing modern software is a complex problem that is seemingly increasing in complexity by the day,” said Ishan Mukherjee, general vice president of product go-to-market at New Relic. “The recent Log4j vulnerability is an example of the challenges security and devops teams face when running on modern architecture, and why we need to close the gaps between ITops and security.” Mukherjee suggests that observability tools are the answer to the challenge of managing vulnerabilities at scale, by unifying application security telemetry, prioritizing risks in the environment and identifying actions users can take to remediate them. “Observability tools are uniquely suited to troubleshoot and get more data on these systems when security vulnerabilities crop up because teams can extract the information they need without having to deploy more agents,” Mukherjee said. The top providers in the vulnerability management market New Relic’s product launch comes as researchers expect the global security and vulnerability management market to grow from $13.8 million in 2021 to $18.7 billion by 2026 as the number of vulnerabilities emerging increases and organizations are pressured to comply with ever-expansive data protection regulations. 
While New Relic is an observability-focused security provider that centers on visibility for software engineering teams, it's entering a space where it's also competing against traditional vulnerability management solutions. One of the organization's main competitors is Tenable, with Nessus, a vulnerability scanner with six-sigma accuracy (the lowest false positive rate in the industry) that covers over 69,000 CVEs and is used by over 30,000 organizations. Tenable recently reported revenue of $541.1 million for last year. Another competitor is Rapid7, with InsightVM, which offers live vulnerability management dashboards, risk prioritization and attack surface monitoring; Rapid7 recently reported annual recurring revenue of $599 million. However, Mukherjee argues that New Relic differentiates itself from other providers by offering transparency signals across the entire tech stack. "New Relic is the only observability platform that allows customers to easily aggregate existing security data from other providers alongside vulnerabilities detected by New Relic agents in one central view," Mukherjee said. "
14,977
2,021
"CISA warns of credential theft via SolarWinds and PulseSecure VPN | VentureBeat"
"https://venturebeat.com/2021/04/25/cisa-warns-of-credential-theft-via-solarwinds-and-pulsesecure-vpn"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CISA warns of credential theft via SolarWinds and PulseSecure VPN Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Attackers targeted both the Pulse Secure VPN appliance and the SolarWinds Orion platform in an organization, the U.S. government said in an incident report last Thursday. Enterprises have been rocked by reports of cyberattacks involving mission-critical platforms over the past year. In the past few months, security teams have been busy investigating a growing list of cyberattacks and vulnerabilities to figure out whether they were affected and to apply fixes or workarounds as needed. The supply chain attack and compromise of the SolarWinds Orion platform reported at the beginning of the year was just the beginning. Since then, there have been reports of attacks against Microsoft Exchange , the Sonicwall firewall, and the Accellion firewall , to name just a few. Defenders also have a long list of critical vulnerabilities to patch, which have been found in multiple widely used enterprise products , including Vmware and F5’s BIGIP appliance. Chained vulnerabilities The alert from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) is an unsettling reminder that attackers often chain vulnerabilities in multiple products to make it easier to move around within the victim network, cause damage, and steal information. Compromising the Pulse Secure virtual private network appliance gave attackers initial access to the environment. SolarWinds Orion platform has been used to perform supply chain attacks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In the incident report, CISA said the attackers initially obtained credentials from the victim organization by dumping cached credentials from the SolarWinds appliance server. The attackers also disguised themselves as the victim organization’s logging infrastructure on the SolarWinds Orion server to harvest all the credentials into a file and exfiltrate that file out of the network. The attackers likely exploited an authentication bypass vulnerability in SolarWinds Orion Application Programming Interface (API) that allows a remote attacker to execute API commands, CISA said. The attackers then used the credentials to connect to the victim organization’s network via the Pulse Secure VPN appliance. There were multiple attempts between March 2020 and February 2021, CISA said in its alert. 
Supernova malware The attackers used the Supernova malware in this cyberattack, which allowed them to perform different types of activities, including reconnaissance to learn what's in the network and where information is stored, and to move laterally through the network. This is a different method than was used in the earlier SolarWinds cyberattack, which compromised nine government agencies and about 100 private sector companies. "Organizations that find Supernova on their SolarWinds installations should treat this incident as a separate attack [from Sunburst]," CISA wrote in a four-page analysis report released Thursday. It appears the attackers took advantage of the fact that many organizations were scrambling in March 2020 to set up remote access for employees who were suddenly working from home because of the pandemic. It's understandable that in the confusion of getting employees connected from completely different locations, the security team missed the fact that these particular remote connections were not from legitimate employees. None of the user credentials used in the initial compromise had multi-factor authentication enabled, CISA said. The agency urged all organizations to deploy multi-factor authentication for privileged accounts, use separate administrator accounts on separate administrator workstations, and check for common executables executing with the hash of another process. While CISA did not attribute the combined cyberattack to anyone in its alert, it did note that this cyberattack was not carried out by the Russian foreign intelligence service. The U.S. government had attributed the massive compromise of government and private organizations between March 2020 and June 2020 to the Russian Foreign Intelligence Service (SVR). Security company FireEye last week said Chinese state actors had exploited multiple vulnerabilities in Pulse Secure VPN to break into government agencies, defense companies, and financial institutions in the U.S. and Europe. Reuters said Supernova was used in an earlier cyberattack against the National Finance Center — a federal payroll agency inside the U.S. Department of Agriculture — reportedly carried out by Chinese state actors. "
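CISA's last recommendation, checking for common executables executing with the hash of another process, can be approximated with a short script. The following is a minimal sketch rather than CISA-endorsed tooling: the baseline hash map is a placeholder you would populate from a known-good image, and the third-party psutil package is assumed to be installed.

```python
# Minimal sketch: flag running processes whose on-disk executable hashes to a
# known binary that carries a *different* name (possible masquerading).
# The KNOWN_HASHES map is a placeholder; populate it from a trusted image.
import hashlib
import psutil  # third-party: pip install psutil

KNOWN_HASHES = {
    "<sha256-of-trusted-svchost.exe>": "svchost.exe",  # placeholder entry
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_masquerading_processes():
    suspects = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = proc.info.get("exe")
        if not exe:
            continue  # kernel threads or processes we cannot inspect
        try:
            digest = sha256_of(exe)
        except OSError:
            continue  # file removed or access denied
        expected = KNOWN_HASHES.get(digest)
        if expected and expected.lower() != (proc.info["name"] or "").lower():
            suspects.append((proc.info["pid"], proc.info["name"], expected))
    return suspects

if __name__ == "__main__":
    for pid, name, expected in find_masquerading_processes():
        print(f"PID {pid}: '{name}' has the hash of '{expected}', investigate")
```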
14,978
2,022
"Picnic helps orgs protect data from social engineering threats | VentureBeat"
"https://venturebeat.com/2022/02/23/picnic-helps-orgs-protect-data-from-social-engineering-threats"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Picnic helps orgs protect data from social engineering threats Share on Facebook Share on X Share on LinkedIn Social engineering (tricking people to get info) is encouraged at Defcon. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, social engineering prevention and detection platform Picnic announced it had raised $14 million as part of a Series A funding round led by Crosslink Capital and Rally Ventures with participation from Energy Impact Partners. The startup aims to address the challenge of social engineering threats that have successfully evaded traditional cybersecurity controls by continuously monitoring an organization’s online digital footprints and public data from over 1,000 data sources, and analyzing whether an attacker could use that information to create a scam. Picnic offers a preventative solution to social engineering attacks which allows them to see what data an attacker could gather on their employees from social media, data brokers, breach repositories, and the dark web. The Era of social engineering The announcement comes as social engineering attacks run rampant, with the average organization targeted by over 700 social engineering attacks each year, with over 12 million spear phishing and social engineering attacks taking place between May 2020 and June 2021. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These attacks are prevalent because there’s no silver bullet antivirus or anti-malware platform that can prevent an attacker from gathering information about a company or individual online, and coordinating targeted outreach to trick them into handing over sensitive information. “Social engineering is the single largest and most challenging problem in cybersecurity that includes myriad attacks (phishing, impersonation, BEC, identity theft, etc). These kinds of attacks have one common thread: they seek to trick targeted people into doing something by leveraging personal data about the target and their personal and professional networks,” said Matt Polak, founder of Picnic. “Traditional approaches have tried to solve this problem by technical means (email gateways, endpoint protection, MFA, etc) and through training. Unfortunately, technical solutions are defeated by hackers, for example, by running a staged attack, and training does little to inoculate users against more than the most basic ‘Nigerian Prince’ types of scams,” he said. 
Instead, Picnic addresses social engineering threats by identifying publicly available information, or open-source intelligence (OSINT), about individuals or organizations, so that organizations can remove it and deny potential attackers reconnaissance data. The idea is to prevent attackers from piecing together public information to target employees with social engineering scams. A new preventative approach to social engineering Picnic is part of the global cybersecurity market, valued at $153 billion in 2020 and anticipated to reach $366 billion by 2028, a market that still has only a handful of social engineering solutions, most of them security awareness training providers. One such provider is KnowBe4, which trains employees to detect social engineering attempts; it has over 40,000 customers and announced $262.2 million in annual recurring revenue last year. Another is Barracuda, which offers security awareness training based on real-world threat templates of malicious emails and analyzes how effectively employees can spot phishing attacks; it is on target to reach $1 billion in sales between 2023 and 2024. These preventative approaches aim to mitigate social engineering threats by teaching employees how to spot manipulation attempts in the form of phishing emails that try to mislead them into clicking through to phishing websites. While other cybersecurity vendors are tackling social engineering threats, Picnic's approach is unique in that it targets the online digital footprint of enterprises. "Picnic has created the first technology platform of its kind capable of addressing public data vulnerabilities preemptively, efficiently, and comprehensively at an integrated, enterprise-wide level," said Polak. "
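To illustrate the kind of reconnaissance data Picnic aims to deny attackers, the sketch below checks whether a list of corporate email addresses appears in known breach repositories via the public Have I Been Pwned v3 API. This is not Picnic's platform or methodology; the API key, the email list, and the pause between requests are assumptions for the example.

```python
# Hedged sketch: check whether corporate addresses appear in known breach data
# using the public Have I Been Pwned v3 API. Not Picnic's platform or method.
# Assumes an HIBP API key in the HIBP_API_KEY environment variable; the email
# list is a placeholder. Mind the API's rate limits in real use.
import os
import time
import requests

API = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"
HEADERS = {
    "hibp-api-key": os.environ["HIBP_API_KEY"],
    "user-agent": "footprint-audit-sketch",
}
EMPLOYEE_EMAILS = ["first.last@example.com"]  # placeholder list

for email in EMPLOYEE_EMAILS:
    resp = requests.get(API.format(account=email),
                        headers=HEADERS,
                        params={"truncateResponse": "true"},
                        timeout=30)
    if resp.status_code == 200:
        breaches = [b["Name"] for b in resp.json()]
        print(f"{email}: found in {len(breaches)} breach(es): {', '.join(breaches)}")
    elif resp.status_code == 404:
        print(f"{email}: no known breaches")
    else:
        resp.raise_for_status()
    time.sleep(1.6)  # stay under the API's per-key rate limit
```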
14,979
2,022
"4 ways to strengthen Azure AD security | VentureBeat"
"https://venturebeat.com/2022/04/26/4-ways-to-strengthen-azure-ad-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 4 ways to strengthen Azure AD security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Many organizations embrace hybrid architectures as the first step in a cloud-adoption journey. In such environments, the integration of Microsoft Active Directory and Azure Active Directory (Azure AD) can simplify administration. However, Active Directory is a hot target for threat actors — especially in hybrid deployments, which can complicate authentication management. Any breach in your identity services can grant malicious users access to your applications and business-critical data. A compromised Active Directory account can enable an on-premises attack to extend to the cloud and vice versa, as evidenced by the infamous SolarWinds attack. This type of compromise can be difficult to detect and mitigate. There is a growing need to focus on hybrid identity management — in other words, how you manage authentication to ensure comprehensive security. Although Active Directory and Azure AD are alike in name, the differ widely in the way they function and in their associated security models. Therefore, a paradigm shift is required to manage security in a hybrid identity environment, particularly in four key focus areas: role-based access control (RBAC), application security, federated authentication and multifactor authentication (MFA). 1. Evaluate RBAC options Azure AD uses RBAC for authorization. Users are assigned roles with predefined permissions that allow or deny access to cloud resources. The rule of thumb is to follow the principle of least privilege (i.e., provide minimal permissions and only while required). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Azure RBAC uses two types of roles: built-in and custom. Built-in roles come with a predefined set of permissions, which makes life easier for administrators but can provide more access than required. If compromised during an attack, these roles could be exploited by threat actors to facilitate lateral movement. Custom roles let you customize permissions, enabling you to strictly control access to cloud resources. To further support the principle of least privilege, you can create Administrative Units in your Azure AD tenant. You can use this capability to further restrict which objects various IT team members can manage, via a specific RBAC role. 
Only native Azure AD accounts should be made members of those highly privileged Azure AD roles. 2. Audit application permission settings Using Azure AD for third-party application authentication could extend your risk perimeter. Some applications read and store Azure AD data in external databases. Others request more permissions in Azure AD than they require to operate. Furthermore, additional security measures like MFA might not work for some apps. For example, many email clients use legacy protocols such as Exchange ActiveSync (EAS), IMAP, MAPI/HTTP, or POP3, which do not support MFA. If those protocols are enabled in your Azure AD tenant, cybercriminals could try to access your mailboxes without being prompted for a second factor. Implement strict governance and conduct periodic audits of app permissions to identify where additional restrictions are needed. 3. Consider federated authentication alternatives to AD FS Traditionally, organizations have used Active Directory Federation Services (AD FS) to enable federated authentication in Active Directory environments. However, AD FS can pose a security risk in hybrid environments, potentially extending the attack surface of an on-premises breach to the cloud. Microsoft provides alternative solutions, such as password hash synchronization, Azure AD Pass-through Authentication, and Azure Active Directory Application Proxy. You can use these options in place of AD FS while integrating on-premises Active Directory with Azure AD. Both password hash synchronization and Pass-through Authentication enable users to leverage the same password to log in to both on-premises and Azure AD integrated applications. The first option synchronizes an encrypted hash of on-premises Active Directory passwords to Azure AD, for a hassle-free user experience. The second uses authentication agents and an outbound-only connection model and can be integrated with native Azure AD security measures like conditional access and smart lockout. However, Pass-through Authentication relies on the availability of your on-premises Active Directory — a problem during ransomware attacks. For resiliency, consider synchronizing the password hashes of your Active Directory users to Azure AD. Azure Active Directory Application Proxy can configure secure remote access to on-premises applications using Azure AD credentials. The service leverages an application proxy connector for the secure exchange of sign-on tokens. This service can act as the first step to phase down usage of AD FS and adopt a truly hybrid identity model. 4. Enforce MFA MFA provides an additional layer of credential protection: Even if attackers get hold of a user's credentials, they also need access to the user's email, phone or security key to clear the authentication process. This requirement can slow down or flag potential infiltration attempts. For MFA to be truly effective, organizations should implement it for all accounts — not just the privileged ones. Attackers can and do use non-privileged accounts to infiltrate systems and move laterally across account access perimeters. You can use MFA in conjunction with conditional access policies for context-aware security implementation. You can also require conditions such as trusted locations, organization-managed devices and secure protocols before granting access to resources.
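As a sketch of how the first point above (keeping highly privileged Azure AD roles limited to cloud-native accounts) could be spot-checked, the following queries Microsoft Graph for members of selected roles and flags accounts synced from on-premises AD. It is illustrative only: it assumes an access token with Directory.Read.All in the GRAPH_TOKEN environment variable, and the role watchlist is an example you would adjust for your tenant.

```python
# Illustrative sketch: flag members of privileged Azure AD roles that are
# synced from on-premises AD (i.e., not cloud-native accounts).
# Assumes GRAPH_TOKEN holds an access token with Directory.Read.All.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
PRIVILEGED_ROLES = {"Global Administrator", "Privileged Role Administrator",
                    "User Administrator"}  # example watchlist, adjust as needed

def get(path: str) -> dict:
    resp = requests.get(f"{GRAPH}{path}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

roles = get("/directoryRoles")["value"]  # only activated roles are returned
for role in roles:
    if role["displayName"] not in PRIVILEGED_ROLES:
        continue
    members = get(f"/directoryRoles/{role['id']}/members"
                  "?$select=displayName,userPrincipalName,onPremisesSyncEnabled")
    for member in members["value"]:
        if member.get("onPremisesSyncEnabled"):
            print(f"[{role['displayName']}] {member.get('userPrincipalName')} "
                  "is synced from on-prem AD; consider a cloud-only account")
```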
Gearing up for hybrid identity protection Hybrid identity protection requires administrative due diligence: enabling the right set of roles in Azure AD, applying airtight security configurations, and adding guardrails such as MFA. In addition, organizations can implement tools that perform continuous assessment and risk profiling, enable visibility into your hybrid identity solution to help track lateral attacks, and provide change-tracking and auto-remediation features to protect against stolen credentials and malicious insiders. No matter how much you fortify your environment, though, threat actors are continuously evolving. Hence, it's equally important to have a recovery plan for Active Directory and Azure AD, in case an attack occurs. Guido Grillenmeier is chief technologist with Semperis. "
14,980
2,022
"Google unveils passwordless log-in plans on World Password Day | VentureBeat"
"https://venturebeat.com/2022/05/05/google-goes-passwordless"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google unveils passwordless log-in plans on World Password Day Share on Facebook Share on X Share on LinkedIn FILE PHOTO: The brand logo of Alphabet Inc's Google is seen outside the company's office in Beijing, China, August 8, 2018. Picture taken with a fisheye lens. REUTERS/Thomas Peter Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today on World Password Day, Google unveiled its vision for a passwordless future and announced that it was going to be offering users passwordless authentication options on Chrome and Android. This announcement comes just as Apple , Google and Microsoft have publicized their commitments to support the common passwordless sign-in standard created by the FIDO Alliance and the World Wide Web Consortium, which aims to encourage technology vendors to offer consumers passwordless sign-in opportunities. According to Sam Srinivas, the project management director of authentication security at Google and president of the FIDO Alliance, by 2023 Google plans to enable users to sign in to apps or websites on their phone simply by unlocking their device (and those on a computer will be able to approve sign-ins via a pop-up on their phones). For enterprises, Google’s move away from passwords not only reduces the chance of credential theft on Chrome and Android, but also highlights that the era of using passwords to control access to resources is coming to an end. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The next generation of authentication Password-based security measures have long failed to control user access to resources. Last year, an audit of the dark web found that there were 15 billion stolen passwords online. The reality is that hackers find it easy to steal passwords and can routinely harvest login credentials with phishing scams and brute force hacks because password-driven security relies on gating access to services around a piece of information the user knows. Unfortunately, modern cyber criminals are simply too good at finding out what piece of information a user knows. “We want all of our users to have the best security protections in place — by default — across their devices and accounts. We know that passwords are no longer a sufficient form of authentication — they are painful and easy for bad actors to access — which is why we are doing everything we can to move users away from needing them. Today’s news lays the ground for this password free future,” Srinivas said. 
"This will require people to use a physical device to authenticate, rather than something they know. Bad actors will always find a way to uncover what a person knows (i.e., phishing), but they can't take away a physical object over the internet." Moving to passwordless authentication will help ensure that users can't be tricked into giving away their credentials to scammers or having them stolen through brute force, while making it more convenient for users to log in. The fast-growing passwordless movement Google isn't the only provider to recognize the advantages of a passwordless approach, both for mitigating security concerns and for improving the user experience with a seamless sign-in option. Researchers anticipate that the global passwordless authentication market will grow from a value of $12.79 billion in 2021 to $53.64 billion by 2030. A number of other prominent providers are also experimenting with phasing out passwords alongside the FIDO Alliance. For example, Apple recently released a solution called Passkeys that eliminates the need for passwords and allows users to use biometric identification measures like Touch ID and Face ID to log into online accounts. At the start of this year Apple also announced that it had reached an all-time revenue record of $123.9 billion. Microsoft, which recently announced $49.36 billion in revenue, is also starting to test the boundaries of its own passwordless approach with Microsoft Authenticator. Users can install the Microsoft Authenticator app, link it to their Microsoft account, and opt to turn passwordless authentication on so they can log into their Microsoft account without a password. Google's passwordless solution may have arrived later than other offerings, but Srinivas argues that the organization has played a critical role in accelerating the passwordless movement so far. "We were the first platform company (i.e., owner of a major OS or browser) to join the FIDO Alliance back in 2013. Since then, we have been encouraging our colleagues across the industry, especially other major platform companies, to join us. We're thrilled that 300 companies have joined FIDO including Microsoft and Apple, allowing us to now solve an internet wide problem the right way — together, with open standards." "
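Srinivas' point that a physical device replaces "something you know" comes down to public-key cryptography: at registration the service stores a public key, and at sign-in it verifies a signature over a fresh challenge. The sketch below shows only that core exchange; it is not the full WebAuthn/FIDO2 ceremony (no attestation, origin binding, or signature counters) and relies on the third-party cryptography package.

```python
# Minimal sketch of the idea behind FIDO-style passwordless sign-in: the server
# verifies a signature over a one-time challenge with the user's registered
# public key, so no shared secret (password) ever crosses the wire. This is a
# simplification, not the full WebAuthn/FIDO2 ceremony.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration (normally done once, on the authenticator/device):
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()  # stored server-side

# Sign-in, step 1 (server): issue a fresh random challenge.
challenge = os.urandom(32)

# Sign-in, step 2 (device): prove possession of the private key.
assertion = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Sign-in, step 3 (server): verify the assertion against the stored public key.
try:
    registered_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("sign-in accepted: device proved possession of the registered key")
except InvalidSignature:
    print("sign-in rejected")
```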
14,981
2,021
"Pizza Hut demonstrates why intelligent virtual agents are the future of customer service | VentureBeat"
"https://venturebeat.com/2021/07/14/pizza-hut-demonstrates-why-intelligent-virtual-agents-are-the-future-of-customer-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Pizza Hut demonstrates why intelligent virtual agents are the future of customer service Share on Facebook Share on X Share on LinkedIn The start of Transform’s Conversational AI Summit, presented by Five9, found business leaders bullish on the future of intelligent virtual agents (IVA). If the future of retail customer service is self-service , the majority of those consumer conversations will increasingly be conducted with intelligent virtual agents — and consumer confidence and satisfaction is already rising dramatically. Virtual agents are the extension of a human workforce, and can be engaged with 24/7 over every conversational channel, from phone to web to SMS and messenger. Powered by AI and natural language processing, these virtual agents are making customer service interactions more efficient and effective , and reducing costs for retail companies that are increasingly looking for new ways to boost their bottom line. We are now past the tipping point where self service is the preferred form of customer service regardless of demographic, said Callan Schebella, EVP of product at Five9. “Gartner is suggesting that 85% of customer interactions will start by self-service in 2022, which is only six months away,” he explained. “But most importantly, of those interactions, 70% will start with speech interfaces by 2023 which is also not too far away.” As the pandemic raged over the past 12 months, 55% of businesses reported an increased volume in customer interaction, and the complexity of those sorts of interactions had increased as well — but technology was slow to catch up. Customers have been frustrated because they’ve had to navigate old-fashioned IVR phone trees in their search for help. “This has led to 42% of companies planning to enhance digital self-service functionality, which had traditionally been out of reach of all but the largest enterprises,” Schebella added. Patrick Branley, director of technology at Pizza Hut Australia, and Henry Hernandez, director of NEMT at Alivi, spoke with Schebella about the role these intelligent virtual agents are playing in the new digital workforce. They’re now available to companies of every size, because advances in AI, natural language processing, and speech recognition have dramatically reduced price points and development cycles, and eliminated the need for third-party specialized technologists. Alivi, which provides solutions for health plan partners to better deliver health care benefits, just implemented an AI-powered IVA from Fve9, Hernandex says. 
Branded 'Ava,' the virtual agent has been used in multiple channels throughout the customer service experience, and the vast majority of interactions with members have been positive, he added, including from a demographic more accustomed to IVRs ('press 1 for…'). The response from health planning partners has also been positive — many are even creating branding that includes the Ava digital agent as a perk for members. Since launch, in just the last few weeks, Alivi has handled over 3,500 return-ride activations. These are service interactions that can be automated because they require little human expertise, freeing up customer service agents to help callers with more urgent or complex needs. Now they're adding new implementations of Ava for appointment confirmations, cancellations, and requests for transportation. These are events that decrease wait times dramatically because they typically take a lot of time for contact center staff to process. "Just by being able to implement this small component, so far we've been able to reduce member wait times, expedite services, and really reduce the amount of time that our members are waiting for return rides, which are all some of the biggest complaints that health plan members have," he said. Next up, Branley spoke about the ways the iconic Pizza Hut brand, with a fifty-year-strong presence in Australia, is using intelligent virtual agents. When the current owners purchased the brand in 2016 from a private equity company, they found they needed to revamp decades-old, legacy-riddled infrastructure, Branley said. However, a challenge they couldn't simply cast to the side was the iconic national phone number that's been used in advertising for decades and is part of the Australian sub-culture. Routing those calls to the correct store near a caller remained an ongoing challenge. They started working on a broad digital transformation project to revitalize the business. However, the platforms they initially turned to were expensive, difficult to maintain, and inefficient, leading to complaints about slow or confused customer service that sent callers to the wrong store. Since adopting the voice recognition features of Five9 and Google APIs, they've been able to overhaul the company's ordering systems and begin to accurately send customer calls to the correct locations. The technology recognizes customer phone numbers, which are linked to their call activity and account information, and can pull up previous orders. Their digital ecosystem's API offers other advanced functions, automating customer service interactions when a customer calls to inquire about wait times and order status. When a customer who has placed an order calls the line for updates, they'll be greeted by the voice of a virtual agent who welcomes them with their order status in real time. "Before you even say anything, the IVA can play back to you the status of your order," explains Branley. "So it can say 'Hey Calum, your pizza's in the oven, and the ETA is 6:35.' No interaction needed with a person; we've serviced that request before you've even asked for it." To wrap up the panel, Schebella took a technical turn. He walked attendees through an in-depth look at the organization's IVA platform, including its dashboards, analytics capabilities, monitoring functionality, and more. He shared a variety of management tasks and customer scenarios, troubleshooting, and other capabilities, demonstrating the power of the platform.
For attendees interested in implementing an IVA system of their own, don't miss the full-length presentation in the panel video, above! "
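The order-status greeting Branley describes (recognizing the caller's number and reading back the order status before the caller says anything) reduces to a lookup keyed on caller ID plus a templated utterance. The toy sketch below is not Pizza Hut's or Five9's implementation; the phone numbers, data store, and wording are hypothetical.

```python
# Toy sketch of the caller-ID greeting described above: match the caller's
# number to an account and speak the latest order status before they ask.
# The phone numbers, order store, and wording are hypothetical.
ORDERS_BY_PHONE = {
    "+61400000001": {"name": "Calum", "status": "in the oven", "eta": "6:35 pm"},
}

def greeting_for(caller_id: str) -> str:
    order = ORDERS_BY_PHONE.get(caller_id)
    if order is None:
        # Unknown caller: fall back to an open-ended prompt.
        return "Hi, welcome to Pizza Hut. How can I help you today?"
    return (f"Hey {order['name']}, your pizza's {order['status']} "
            f"and the ETA is {order['eta']}.")

if __name__ == "__main__":
    print(greeting_for("+61400000001"))  # known caller with an active order
    print(greeting_for("+61400000002"))  # unknown caller
```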
14,982
2,021
"The making of an intelligent virtual agent (IVA) | VentureBeat"
"https://venturebeat.com/2021/07/20/the-making-of-an-intelligent-virtual-agent-iva"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The making of an intelligent virtual agent (IVA) Share on Facebook Share on X Share on LinkedIn Presented by Five9 For years, businesses have sought to provide customers with more self-service options and increase automation rates in their contact centers using speech-enabled interactive voice response systems (IVRs). They have also invested heavily in developing web chatbots. However, these systems were complicated to develop and required organizations to purchase, host, and manage a vast array of software, hardware, and equipment. Applications were also created in silos, requiring multiple development projects while making it difficult for applications to share data and context. A number of disruptive innovations have made it easier and more affordable to deploy AI-and-speech-enabled self-service. Vendors like IBM, Google, and Amazon migrated the underlying speech recognition, text-to-speech, and natural language understanding technology to the cloud, packaging it as software-as-a-service. Organizations can now pay as they go for these services rather than buying the technology outright. Additionally, the learning models powering these cloud speech services are training with millions of utterances as consumers speak to their smart devices. This enables the technology to more accurately determine a speaker’s intent because it has learned to recognize the many different ways people phrase their questions and requests. Furthermore, speech application development has become much more streamlined with no-code development tools. Organizations of all sizes are now using this technology to develop and deploy intelligent virtual agents (IVAs) in their contact centers. IVAs can answer the phone with an open-ended question (“How can I help you?”), understand what a customer is asking for, react, and complete simple tasks and transactions. So, what does it take to develop an IVA that makes self-service as effortless as speaking to Alexa or Siri? Read on to find out. 1. Determine which skills your IVA needs to have Just like human contact center agents, each intelligent virtual agent has a set of skills. For example, an IVA with basic skills might simply answer the phone, ask the caller if they want to maintain their place in a call queue and schedule a callback. Or an IVA can have more advanced skills, such as understanding human speech in multiple languages, responding to frequently asked questions in multiple languages, and processing a credit card payment. 
Organizations can determine the skills their IVA will need based on the types of customers they serve, and the contact center tasks they want to automate (more on that in step 3). 2. Decide which channels to support Customers can interact with IVAs using their channel of choice. They can speak to IVAs over the phone or communicate through text-based channels including SMS, social media messaging apps like WhatsApp, and web-based chatbots. Businesses can determine their customers' preferred channels through contact center reporting or surveys and extend the organization's self-service applications across multiple touchpoints. But they must ensure their applications share the same back-end components — databases, reporting, payment gateways, etc. This allows an IVA to maintain context so the conversation can progress seamlessly as it is passed from one channel to another. 3. Choose the tasks you will automate Organizations can use their contact center reporting data, surveys, and input from business leaders and front-line service agents to determine the service tasks that are best suited for automation in their contact centers. Consider which customer intents will be the easiest to fulfill via IVA, which tasks will deliver the most bang for the buck when automated, and which will be most effective at freeing up human agents for more complicated tasks. Once you've determined which tasks are most feasible and desirable for IVA handling, weight the opportunities to decide which to tackle first. At the top of the list are the things that are easiest to achieve and deliver the highest value. 4. Select which conversational AI engines you want to use Next, you'll need to choose the underlying speech services that will power your IVAs. For example, vendors like IBM, Google, and Amazon provide hundreds of different voices with varying accents and tones, and an organization might choose the service with the voice they think their customers will like best. Or they may find one service to be more accurate than another when testing their application. You'll want to work with an IVA provider that offers plenty of options and the flexibility to switch between cloud-based speech services freely. Speech technology evolves quickly, and you'll want to be able to take advantage of the latest advancements without being locked into a particular vendor's services. 5. Understand how your IVA will be integrated with your UC or CCaaS platform When executing a contact center task, an IVA must be able to integrate with an organization's back-end systems to retrieve the information or application it needs to resolve a request. These systems can include customer relationship management (CRM) platforms, knowledge bases, calendars, and payment gateways, for example. But IVAs can also streamline operations in a cloud- or premises-based unified communications environment. For example, you can integrate and centralize legacy phone and customer service systems into a cloud-based solution that is easier to manage, creating a series of phone extensions that allows your IVA to transfer calls from one location to another in your organization. 6. Build your tasks As mentioned earlier, this step has become much easier thanks to no-code, drag-and-drop IVA development platforms. No-code development platforms provide a web-based design environment, where users with minimal technical expertise can assemble a speech application by dragging and dropping graphical building blocks that tell it what to do.
Many platforms also provide pre-built IVA templates for common tasks such as credit card payments or appointment management, which further streamlines the development phase. In some cases, new IVA tasks can be designed and deployed in a matter of days. 7. Train and tune your AI An IVA determines how it will fulfill a customer's intent by matching their utterances to keywords in the task flow. Then it executes the actions that have been assigned to those keywords. IVAs get smarter and more accurate at recognizing customer intents through a supervised learning approach. This approach involves using the data gathered from live caller utterances to train the IVA to match new keywords to intents so that it can understand the different ways users phrase the same intent. 8. Review performance IVAs can log information such as timestamped transcriptions of caller utterances, the detected intents, which prompts and/or transfer announcements have been played to the caller, and the final call destination. This data will help you understand how your application is performing and where you might be able to make improvements. Supervised learning is an ongoing, and currently manual, process because intents are constantly evolving. Fortunately, we're starting to see IVA providers address this challenge with cost-effective solutions that minimize human supervision in training and help organizations assess the optimal number of intents that will maximize IVA performance. This will make it easier to automatically improve intent detection accuracy, and to identify new intents that will maintain the quality of an IVA over time. Dig deeper: To learn more about IVA development, and what you can create with the latest speech services and code-free design environments, click here. Callan Schebella is EVP, Product Management at Five9. "
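Step 7's keyword-to-intent matching can be pictured in a few lines of code. The sketch below is a deliberately naive illustration, not how Five9 or any production NLU engine works; the intent names, keywords, and canned responses are invented.

```python
# Naive keyword-overlap intent matcher, for illustration only. Production IVAs
# use trained NLU models; the intents and keywords here are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    name: str
    keywords: set
    response: str

INTENTS = [
    Intent("order_status", {"order", "status", "where", "pizza"},
           "Your order is in the oven, with an ETA of 6:35 pm."),
    Intent("schedule_callback", {"callback", "call", "back", "later"},
           "Okay, I'll hold your place in the queue and call you back."),
]

def detect_intent(utterance: str) -> Optional[Intent]:
    tokens = set(utterance.lower().replace("?", "").split())
    best, best_score = None, 0
    for intent in INTENTS:
        score = len(tokens & intent.keywords)  # keyword overlap as the score
        if score > best_score:
            best, best_score = intent, score
    return best

if __name__ == "__main__":
    for text in ("Where is my pizza order?", "Can you call me back later?",
                 "I want to change my address"):
        intent = detect_intent(text)
        print(text, "->",
              intent.response if intent else "fallback: route to a human agent")
```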
14,983
2,022
"The new customer experience includes the metaverse: What brands need to know | VentureBeat"
"https://venturebeat.com/2022/03/29/the-new-customer-experience-includes-the-metaverse-what-brands-need-to-know"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The new customer experience includes the metaverse: What brands need to know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Mastering the capabilities to deliver a superior customer experience online and in-app has always been essential. But, in a world impacted by lockdowns, global events, and rapid change — plus a marketplace where identifiers are disappearing due to Apple’s iOS 14 and Google’s impending removal of third-party cookies — you have a perfect storm of conditions for a paradigm shift in personalization. Personalization becomes individualization, marketing and messaging become a value exchange and all of it is fiercely customer-centric. Against this backdrop, brands and businesses have to deliver marketing and messaging that is personal, relevant, and assistive at every step of the customer journey. To make things even more complex, there isn’t just one customer journey. There are multitudes, and they are defined by nuances in how and where customers enter the funnel. Making that match requires marketing expertise and marketing automation. Now, it demands much more than that. Not only must marketers make that match accurately, but they must do so in real-time, and in a reality where the definition of omnichannel now includes the metaverse. Marketers have to be bold and be willing to literally push the boundaries. Why? Because they have to do more with less in a world without limits. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The metaverse and the challenges facing performance marketers today The current marketing landscape is fenced in by privacy regulations and limited third-party data. Marketers have to scrape by on what little third-party data they have while abiding by GDPR and CCPA. Without identifiers and tracking, performance marketing is flawed. It costs more to acquire customers, which is why it is more valuable than ever to retain them. It’s a new dynamic pushing retention marketing to the top of the business agenda — especially because consumer expectations of personalization are at an all-time high. Recent McKinsey research shows that 71% of consumers expect companies to deliver personalized experiences. When brands fail to do so, the majority of consumers get frustrated. 
Customer frustration in the next normal is a serious threat, as a majority of shoppers switched brands or products during the pandemic — and have plans to retain those shopping behaviors (and loss of loyalties) moving forward. With this unprecedented shift in brand loyalty, customer retention is imperative. To make things worse, marketers have to find their voice and place in the experience economy and architect strategies to excel in a new playing field: the metaverse. From mobile-digital online lifestyle events and esports tournaments that attract millions of consumers and billions of interactions to pathbreaking brand partnerships that deliver a new breed of experiences, the signs of a seismic shift are everywhere. Retention marketing wins in a metaverse world The metaverse is a new marketplace where retention marketers excel. It’s here that “ engineered serendipity” and the ability to engage with customers separates the leaders from the losers. Engineered serendipity is the concept that the ultimate “macro” outcome — be it luck, innovation, or conversions — is the result of thousands of micro-actions. In marketing terms, it’s about noticing those micro opportunities and acting when the moment is right, with the right content, to create a happily unexpected macro experience that delights users. These are the experiences that retain customers and lead to improved brand perception, profits, and engagement. These experiences help customers feel that a company “gets” them, increasing their likelihood of saying “yes” to suggested content and products later. Positive customer experiences boost customer satisfaction rates by 20% and conversion rates by 10% to 15% , while also lowering sales and marketing costs by 10% to 20%. On the whole, increasing customer retention by 5% leads to an increase in profits by 25% to 95%. Engineered serendipity thrives in the metaverse, so long as marketers seize the moment. It’s here that marketers must encourage discovery and let customers take charge of their experiences. Marketers must be able to view and act upon customer data in real-time, and with the right technology, they can. Four keys to success As the metaverse becomes omnipresent, brands and marketers must deliver hyper-personalization without data. Experiences must be driven through insights into individual behavior, enabling engineered serendipity to take place at scale. It is no longer sufficient to react based on the past behavior of many; brands must react to an individual’s behavior right now, in the present moment. For retention marketers to succeed, winning requires: The ability to view and act upon real-time customer data. With the right technology, brands can transform data collection from basic storage to actionable intelligence. An intelligent data layer enables brands to see the activity and demographic data of customers in real-time — what they’re buying, what they’re doing, where they are, and what device they are using. This enables brands to not only respond in real-time to an individual user, but also to build smart user personas and content moving forward. The ability to capture and keep customer interest. Granular and real-time segmentation and personalization will equip marketers to individualize experiences in lockstep with what customers value in any given moment. Platforms that combine real-time analytics, segmentation, and engagement functionality will allow marketers to adapt to minute changes in customer preferences at any given moment, as opposed to after the fact. 
The ability to be genuine, authentic, and helpful. To paraphrase Mastercard CMO Raja Rajamannar, marketing is serving, not just selling. It's about individualized engagement, informed by data and enhanced with humanity and empathy. For example, last summer Reebok debuted a new mobile app that helped kids stuck at home set up regulation-sized basketball courts in their neighborhood, through the power of augmented reality. The ability to turn the absence of identifiers into an opportunity for deeper engagement. Growing awareness of data privacy will require marketers to show and tell customers what is being done with their data. Fortunately, two-thirds of consumers are happy to share their data in return for something of value, be it discounts or a more tailored experience. Savvy marketers will view Data Relationship Management (DRM) as an opportunity to deepen trust, rather than simply achieve compliance. Consent requests will evolve from dry legalese to language that is visible, understandable, and relatable. Brands may use videos to clearly articulate the value proposition of sharing one's first-party data, building trust, reducing hesitation, and encouraging customers to share the essential data that powers retention. When you connect the dots, it becomes clear that the exact same trends that mark the demise of performance marketing herald a new era in retention marketing. Through real-time discovery and engineered serendipity, marketers can provide personalized experiences that feel one-to-one, not one-to-many. Rather than making educated guesses and creating an experience based on macro-segmentation, micro-segmentation can make a customer feel uniquely understood, building the loyalty that seeds retention and revenue. Abhishek Gupta is the CCO at CleverTap. "
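The "real-time segmentation" capability called out in the first two keys above can be pictured as evaluating each incoming event against segment rules the moment it arrives, rather than in a nightly batch. The sketch below is a toy illustration; the event fields, segment names, and thresholds are invented, and production platforms do far more (identity resolution, streaming infrastructure, model-based scoring).

```python
# Toy sketch of rule-based, real-time micro-segmentation: each incoming event
# is evaluated against segment predicates as it arrives. Field names, segment
# names, and thresholds are invented for illustration.
from typing import Callable, Dict, List

Event = dict  # e.g. {"user_id": "u1", "type": "purchase", "value": 120.0, "device": "ios"}

SEGMENT_RULES: Dict[str, Callable[[Event], bool]] = {
    "high_value_mobile": lambda e: e.get("device") in {"ios", "android"}
                                   and e.get("value", 0) >= 100,
    "cart_abandoner": lambda e: e.get("type") == "cart_abandoned",
}

def segments_for(event: Event) -> List[str]:
    """Return the segments this single event qualifies the user for, right now."""
    return [name for name, rule in SEGMENT_RULES.items() if rule(event)]

if __name__ == "__main__":
    stream = [
        {"user_id": "u1", "type": "purchase", "value": 120.0, "device": "ios"},
        {"user_id": "u2", "type": "cart_abandoned", "value": 45.0, "device": "web"},
    ]
    for event in stream:
        print(event["user_id"], "->", segments_for(event) or ["no_match"])
```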
14,984
2,022
"You can solve the devops talent shortage — with compassion | VentureBeat"
"https://venturebeat.com/2022/03/09/you-can-solve-the-devops-talent-shortage-with-compassion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community You can solve the devops talent shortage — with compassion Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Chris Boyd, VP of Engineering for Moogsoft A majority (64%) of leaders across various IT functions struggle to find skilled devops practitioners , according to Upskilling 2021: Enterprise DevOps Skills Report. That’s not surprising considering the pandemic accelerated digital transformation and drove the demand for tech talent to an all-time high. Just last year, hiring managers were trying to fill more than 300,000 DevOps jobs in the U.S. And the problem isn’t going away. The scramble for a limited pool of devops talent will continue as more businesses migrate their assets to the cloud and the tech world increasingly shifts to ephemeral machines. Enterprises need devops practitioners to keep up with rapid app and platform improvements, but this tech talent generally has its pick of job options. Many hiring managers and IT leaders are bewildered by exactly how to attract and maintain devops talent. After all, they can no longer rely on traditional methods like hefty salaries and attractive benefits. In today’s world, genuine job fulfillment trumps just about everything else. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Here’s the secret to building your tech team: employers have to care. But what does it mean to actually care? And what does that look like in practical terms? Let’s dive into the specifics. What do I know about devops talent anyway? There once was an overarching message that employees should be lucky just to have a job. Maybe a conversation about job satisfaction and goal achievement would come up in an annual review. But that’s not the narrative anymore. I left a 12-year stint at one of the world’s largest domain registrar and web hosting companies. I had a cushy job, an excellent salary and the unique opportunity to rest and reinvest. But I’m not good at complacency. I’ve been a lifelong tinkerer, and my cushy job no longer aligned with my desire to play with tough problems and undercover creative solutions. To retain people, managers have to nurture employees’ desires and appreciate them in a way that can’t be manufactured. But, outside of preschool, no one is taught how to care. So, how can managers foster a supportive, stimulating environment where employees know they’re valued? 
Know your people Managers must understand their workforces. This simple directive is often overlooked because it’s time-intensive. But investing time in knowing each employee personally and professionally will bring meaningful rewards to both employers and employees. Frequent touchpoints go beyond sending a signal to employees that managers are interested in them. They also help leaders identify their high-potential employees and determine what makes them tick. If you give your devops talent opportunities because you care about them and understand their professional passions, they are more likely to be engaged in their work and less likely to bolt to the next best thing. I meet with every employee under me, regardless of seniority level. That’s when l hear that one person wants X and another is interested in Y. And I keep those interests in mind, tracking projects in the pipeline and aligning work with employees’ interests. A promising technical lead on one of my scrum teams wanted to work on an upcoming project to productize an algorithm we were implementing into Moogsoft’s platform. I had an ideal project four months out in the roadmap. And I moved it up, delaying something on the engineering side. Did the switch create a headache for me? Absolutely, but ultimately, it was worth feeding this employee’s interests and demonstrating that I supported his development. And you can’t achieve that without investing time in getting to know your people. Be transparent at all costs In my eyes, there are simple reasons why people show up to work each day. And I gauge each employee’s happiness based on these three elements: Roles and responsibilities Compensation Who you work with and for While most employees can deal with one strike (e.g. you like the company and the work but feel undercompensated), most can’t deal with two. I discuss job satisfaction, based on each element of work, and try to be as transparent as possible. Of course, I work to improve any lagging elements. But even if I’m stuck, employees know that I care about them and their work lives. Being transparent is also realizing that a business is a business, and people are going to leave. That’s a risk you take as an employer. While leaders are always at an advantage when it comes to transparent communication, managers should encourage their people to talk openly about when they feel it’s time to pursue other opportunities. Maybe you can fix the problem. Or maybe it really is time for the person to move on, and you can create a transition plan or even help your employee find the right opportunity. Regardless of the specific situation, the more information that’s shared, the better. Provide comedic relief Devops teams don’t need to be reminded that their roles can be stressful. During an outage, they know that there’s an immense amount of the business’s money on the line, and everyone — from the board on down — is watching and waiting. It’s not helpful for me to highlight just how high the stakes are. What is helpful is providing comedic relief. That’s often the best way I can help an engineering team perform at its best while fixing an outage. If the boss is having a good time, the team can also relax. And it’s essential to performance and morale that teams let their guards down and know their skills are trusted. Even without a blood pressure-raising, service-disrupting incident, I want my team to get up in the morning excited to give each other crap on Slack or roast each other with memes while working. 
After all, no one — devops talent included — wants to stick around for an authoritarian leadership style that requires work to be a slog. There's a lot of talk about increasing employee engagement and strengthening workplace cultures. While I agree that these are essential to the modern workplace, can't managers just authentically care and the rest will naturally follow? Chris Boyd is VP of Engineering for Moogsoft. "
14,985
2,022
"Report: 93% of CIOs say the Great Resignation has made it harder to hire skilled developers | VentureBeat"
"https://venturebeat.com/2022/04/18/report-93-of-cios-say-the-great-resignation-has-made-it-harder-to-hire-skilled-developers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 93% of CIOs say the Great Resignation has made it harder to hire skilled developers Share on Facebook Share on X Share on LinkedIn A 'help wanted' sign in Austin, Texas Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new global survey of 600 CIOs and IT decision-makers commissioned by Salesforce ’s MuleSoft reveals that organizations find it challenging to attract and retain the skilled developers needed to drive digital transformation. The survey was conducted to better understand the repercussions organizations will experience if there’s no action to improve developers’ work state. The report found that as demand for faster innovation continues to rise, today’s skill shortage is piling pressure on already-stretched teams, creating the risk of developers tapping out — and leaving businesses’ infrastructure at heightened risk. Key findings include that retaining and recruiting talent is harder than ever before. In fact, 93% say the Great Resignation has made it more difficult for their IT teams to retain skilled developers and 86% say it has become more difficult to recruit them in the last two years. Additionally, information overload can be alleviated with automation tools and technologies, as 91% of organizations say they need solutions that automate critical processes for developers to do more with less. Lastly, employees want to feel empowered by their organizations to upskill themselves. The survey found 90% of organizations say empowering more individuals across the business to integrate apps and data for themselves would significantly reduce pressure on developers and accelerate transformation. Organizations recognize the need to empower the wider workforce to take some strain away from developers. Empowering a team is best achieved by encouraging the growth of business technologists – employees from outside the IT department who can play a more active role in digital transformation — while IT maintains security and control. The report’s findings can help IT leaders scale their teams and give their developers and entire workforce what they need to succeed. Read the full report by Salesforce and Mulesoft. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,986
2,022
"Predicting post-pandemic tech startups and industry disruption | VentureBeat"
"https://venturebeat.com/2022/02/17/predicting-post-pandemic-tech-startups-and-industry-disruption"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Predicting post-pandemic tech startups and industry disruption Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article is contributed by Hari Shetty, sector head and senior vice president of technology platforms & products at Wipro Limited. After a disruptive year, enterprises and startups are finding greater success together. Prior to 2020, the term “ disruption ” typically referred to startups and innovators that were doing things differently — disrupting established industries like ecommerce , banking, and health services through a combination of new technologies and innovative business models. But the pandemic pushed companies over a technological tipping point. Now, even leaders who were reluctant to change with the times are embracing technology to keep pace. The turn to remote work in 2020 forced more businesses to integrate collaborative software and platforms into day-to-day operations. Retailers doubled down on ecommerce when their stores closed. When panic buying strained supply chains, distributors started looking to AI for help managing inventory and distribution throughout their supply chains. With the growth of ecommerce and the digitization of nearly every aspect of the economy, the pandemic leveled the playing field and reduced the role of borders and geographic proximity in shaping consumer preferences. Companies that adapted quickly to the new conditions were able to reach global consumers like never before. Increasing digitization has yielded significant benefits that scale accordingly. Just a few years ago, for companies to truly go global, they needed physical locations in other countries. Today, a startup in Bangalore can serve consumers in Bogota quickly and efficiently. Perhaps because the pandemic itself was so disruptive, the concept of “disruption” in business and the tech industry had to shift. Established businesses have started seeing new tech not as a threat to traditional operations but as a tool to help them adapt and stay competitive. Companies that invested heavily in technology prior to or during the pandemic have been able to navigate the crisis more successfully than those that have not. Now, the positive aspects of industry disruption have become synonymous with transitioning to digital business models and leveraging technologies like AI and hyperautomation to increase flexibility, agility, and business resilience. 
These are universal goals for businesses — building a competitive organization capable of navigating change. As a result of these developments, businesses of all sizes have become more enthusiastic about investing in technology. Innovative and path-breaking technology has even found a welcome place in industries that are not typically associated with high levels of digitization, such as agriculture. For example, the power of the Internet of Things (IoT) is being leveraged in American farms to increase crop yield and boost flavor by constantly monitoring each aspect of the growing process. But the implications are bigger: smart agriculture has the potential to decrease dependence on pesticides, reduce operational costs, optimize water usage, and ensure better land management. As climate change continues to worsen, disruptive technologies such as this offer a preview of how innovation can help meet growing needs. On the rise: The most popular innovations going forward The pandemic separated tech startups into two groups: those with potential and those likely to fail fast. It also reinforced how the concept of disruption itself has evolved. Pathbreaking new business models don't appear as frequently as they did a decade ago. This doesn't mean that innovation has stagnated. As technology has become more pervasive in our work and personal lives, the very nature of disruption has evolved. For instance, innovations in cybersecurity will cater to RPA and bot security governance, mitigating attacks on IoT and cyber-physical systems, countering espionage attacks on emerging digital twins, and so on. Companies developing solutions to support remote work and remote collaboration saw their business increase tremendously during the pandemic. Other winners include companies focused on education and remote learning, automation, electric vehicles, battery technology, blockchain, artificial intelligence (AI), and machine learning technologies. Demand is also expected to increase for new tech in healthcare, ecommerce, logistics, and SaaS segments. What's clear here, and true to the evolution of industry disruption, is that the most popular ideas to come out of the pandemic focus on better understanding the end user — whether that's a customer or an employee — and on leveraging technology to develop more sustainable business models. Often, these interests are combined and rely primarily on insights generated from vast amounts of data. For example, anticipatory design leverages AI and machine learning to anticipate the customer journey and create a design that reflects the current context. This becomes relevant in an emerging business model — direct-to-consumer, or D2C. Along the same lines, and rooted in data, is conversational smart assistance, which can help organizations simultaneously streamline operations and improve the employee as well as customer experience. Talent and focus: The keys to successful industry disruption In this transformative time, businesses and startups need top talent to help strategize and develop their next moves. Most large-scale transformation initiatives fail due to a lack of critical talent. Since top-notch tech talent is scarce, companies that can acquire, train, and retain talent will be well equipped to grow and adapt to whatever lies ahead. While it's difficult to say for sure what the post-pandemic economy will look like, business leaders can prepare their organizations by remaining open to disruptive technologies and embracing the agile operations of smaller organizations.
Hari Shetty is sector head and senior vice president of technology platforms & products at Wipro Limited. "
14,987
2,022
"Report: 78% of U.S. execs rely on AI insights to enhance marketing efforts | VentureBeat"
"https://venturebeat.com/2022/02/04/report-78-of-u-s-execs-rely-on-ai-insights-to-enhance-marketing-efforts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 78% of U.S. execs rely on AI insights to enhance marketing efforts Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Using artificial intelligence (AI) to create personalized language and content is one of the most impactful tools digital marketers have available to ensure they stay ahead of the curve in ever-evolving markets. In January 2022, Persado and Coresight Research found that 78.3% of U.S.-based executives hold AI accountable for creative development and execution. In doing so, marketers can leverage AI-generated content and track the impact of words and phrases that resonate most with their custmers. As natural language generation (NLG) continues to grow in prominence, marketing teams will evolve to work with AI to help create the ultimate customer experience. Integration of advanced digital technologies within marketing teams will play an increasing role in driving revenue. The Persado study emphasizes that 53.9% of respondents cited they are currently leveraging AI or machine learning to offer a personalized experience to customers. To create an engaging customer experience , marketers rely on data to create and develop personalized customization. In the past, marketers have relied heavily on third-party data, but changes in privacy laws in the U.S. and EU have shifted businesses’ focus to owning first-party data — data that organizations collect directly from their customers (such as demographics, purchase history, etc.). This shift is emphasized within Persado’s research, which found that 78.2 % of U.S.-based executives believe efficient use of first-party data is very or critical for AI in digital marketing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! First-party data is critical for marketers to create enhanced customer experiences. AI marketing tools enable organizations to anticipate customer needs, create targeted campaigns, and reduce the time to resolve inquiries. Implementation of these AI tools will play an increasing role in driving revenue. This study emphasizes the growing role advanced technologies play in creating positive brand experiences for consumers and generating significant returns on investments. Results are based on findings from a November 2021 Persado and Coresight Research survey of 165 U.S.-based executive business leaders whose companies use AI and ML in digital marketing or plan to do so in the next three years. 
Read the full report by Persado. "
14,988
2,022
"Data science could revive targeted marketing after iOS 14 privacy crackdown | VentureBeat"
"https://venturebeat.com/2022/04/23/data-science-could-revive-target-marketing-after-ios-14-privacy-crackdown"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Data science could revive targeted marketing after iOS 14 privacy crackdown Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Even before the iOS 14.5 update , protecting consumer privacy has been a high priority for big tech and for brands and marketing. Over the next few years, major platforms are likely to implement more consumer controls and these controls can further complicate digital marketing strategies. Data scientists are addressing this challenge by figuring out how to preserve consumer privacy while also optimizing ad performance. The perceived tradeoff between privacy and targeted marketing. With the release of iOS 14, Apple’s users can now opt out of targeted tracking and they’re seizing the opportunity. More than 62% of people who upgrade to iOS 14 opt out, reducing the effectiveness of digital advertising, which in turn increases digital marketing costs by more than 20%. With fewer users sharing information, brands are suddenly faced with a famine of the data that has powered targeted advertising for years and, as a result, brands are losing faith in the major social platforms’ ability to reach the right audiences. After the iOS changes, Facebook alone says it will lose $10 billion in revenue this year and other major platforms, like Snapchat, Instagram, Pinterest and TikTok, will also be impacted. While it is essential to protect consumer privacy, targeted advertising is also beneficial to the retail ecosystem: it offers consumers products and services that are far more likely to meet their needs. For brands, targeted marketing serves as a highly efficient acquisition tool, which ultimately factors into overall pricing and the consumer experience. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Protecting consumer privacy vs. marketing: A balancing act The answer lies in leveraging end-to-end encrypted data sources, which provide the best of both worlds. By having a wall between where the users engage (media platforms) and where data analytics live (proprietary databases), consumers have a choice around privacy while also receiving valuable brand information and content. Creating this “wall” between the user and their data helps combat the turbulence caused by the iOS 14 privacy option. It mitigates the lost opportunity from limited consumer tracking with an AI-based algorithmic approach. 
By constructing custom audiences, companies can put their digital ads in front of the right customers across major platforms. Here’s how it works : By accessing a proprietary database of more than 45 million consumer personas, brands can optimize their digital marketing campaigns by utilizing customized shopper profiles — data that faithfully represents their target consumer but is fully anonymized for privacy. Once these data sets are constructed, data scientists use AI to iterate on high-performing audiences to best fit each unique conversion funnel. They fine-tune the dimensions using targeted experiments and technology that gets smarter with each iteration. These AI-built audiences can then activate social platforms’ built-in targeting engines, which are currently underused as the data signal required to activate effective targeting has been cut off by iOS14 with Android soon to follow. Data science helps drive revenue Using this approach, brands can protect the consumer while still enhancing profitability at scale. AI and machine learning are now being used to help drive revenue, reduce acquisition costs and increase return on advertising spend (ROAS). After the iOS changes were implemented, we worked with furniture company Apt2B to see if it could still profitably target audiences on Facebook using this new approach. The experiment worked. The company actually generated more revenue on less spend. Apt2B’s initial 60-day ROAS for its new curated audiences was 41% better than Apt2B’s standard campaigns. And after three months, they had generated $700,000 in revenue based on a $60,000 investment in advertising spend. Apt2B’s COO Alex Back said, “The new performance achieved was so cost-effective at scale that our team had to triple-check the results. While other businesses have looked elsewhere for advertising solutions, we’re seizing the opportunity and are investing more in the Facebook platform.” The release of iOS 14 may have been a big step towards greater consumer privacy protection, but it does not need to be the end of well-executed targeted marketing, as many had predicted. With sophisticated technology, a large warehouse of anonymized data and machines that can learn as they go, it may just be possible to get the best of both worlds. Alex Song is the founder and CEO of Proxima DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,989
2,021
"Trifacta expands data preparation tools with Databricks integration | VentureBeat"
"https://venturebeat.com/2021/04/07/trifacta-expands-data-preparation-tools-with-databricks-integration"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Trifacta expands data preparation tools with Databricks integration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Trifacta today announced it has integrated its data preparation tools with a data warehouse platform based on the open source Apache Spark framework provided by Databricks. This is in addition to repositories based on an open source data built tool (DBT) that is maintained by Fishtown Analytics. In both cases, Trifacta is extending the reach of tools it provides for managing data pipelines to platforms that are widely employed in the cloud to process and analyze data, Trifacta CEO Adam Wilson said. Trifacta traces its lineage back to a research project that involved professors from Stanford University and the University of California at Berkley and resulted in a visual tool that enables data analysts without programming skills to load data. In effect, Trifacta automated extract, transform, and load (ETL) processes that had previously required an IT specialist to perform. There is no shortage of visual tools that let end users without programming skills migrate data. But Trifacta has extended its offerings to a platform that enables organizations to manage the data pipeline process on an end-to-end basis as part of its effort to meld data operations (DataOps) with machine learning operations (MLOps). The goal is to enable data analysts to self-service their own data requirements without requiring any intervention on the part of an IT team, Wilson noted. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google and IBM already resell the Trifacta data preparation platform, and the company has established alliances with both Amazon Web Services (AWS) and Microsoft. Those relationships enable organizations to employ Trifacta as a central hub for moving data in and out of cloud platforms. The alliance with Databricks and the support for DBT further extend those capabilities at a time when organizations have begun to more routinely employ multiple cloud frameworks to process and analyze data, Wilson said. In general, data engineering has evolved into a distinct IT discipline because of the massive amount of data that needs to be moved and transformed. While visual tools make it possible for data analysts to self-service their own data requirements, organizations are now also looking to programmatically move data to clouds as part of a larger workflow. 
Many individuals that have ETL programming expertise, often referred to as data engineers, are now in even higher demand than data analysts, Wilson said. Once considered the IT equivalent of a janitorial task that revolved mainly around backup and recovery tasks, data engineering is now the discipline around which all large-scale data science projects revolve, Wilson noted. In fact, IT professionals with ETL skills have reinvented themselves to become data engineers, Wilson added. “In the last 12 months, data engineering has become the hottest job in all of IT,” Wilson said. It remains to be seen just how automated data engineering processes can become in the months and years ahead. Not only is there more data to be processed and analyzed than ever, the types of data that need to be processed have never been more varied. Going forward, a larger percentage of data will be processed and analyzed on edge computing platforms, where it is created and consumed. But the aggregated results of all that data processing will still need to be shared with multiple data warehouse platforms residing in the cloud and in on-premises IT environments. Regardless of where data is processed, the sheer volume of data moving across the extended enterprise will continue to increase exponentially. The issue now is figuring out how to automate the movement of that data in a way that scales much more easily. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,990
2,022
"What to expect for data prep and data analytics in 2022  | VentureBeat"
"https://venturebeat.com/2022/01/14/what-to-expect-for-data-prep-and-data-analytics-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What to expect for data prep and data analytics in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While modern organizations generate astronomical amounts of data every day, many still struggle to make sense of it all. However, that’s gradually starting to change, as more data prep and analytics providers come to the surface to empower nonspecialist users to translate raw data into actionable insights. In fact, researchers estimate that the Data Preparation Tools Market size will grow at a CAGR of over 20% to reach a total value of $13.15 billion by 2028, with organizations having a range of accessible data analytics providers to choose from, such as Alteryx , IBM Watson Studio, TIMi Suite, and Incorta. With the data analytics and preparation market in a state of growth, in 2022, more companies those data analytics providers will invest in will develop solutions that give decision makers better transparency over their operational data. Alteryx acquires data preparation provider Trifacta for $400 million Data preparation provider Alteryx announced it invested $400 million to acquire visual data engineering cloud platform Trifacta. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! By acquiring Trifacta, Alteryx intends to use its advanced cloud platform to help customers build a more robust data pipeline with greater profiling and preparation capabilities. It’s worth noting that this acquisition comes after two recent acquisitions, including Hyper Anna, a cloud platform for deriving AI-driven insights, and Lore IO, a data modeling solution. 2021 was a successful year for Trifacta, with the company announcing an annual recurring revenue of $579 million, an increase of 29% from the year before. A key reason for the organization’s success in the market is its ability to differentiate itself from other specialist solutions by ensuring that non-developers have the option to gain access to meaningful insights. IBM acquires Envizi, mParticle acquires Indicative There have also been a number of other notable acquisitions occurring in the data analytics and preparation space. At the start of this year, IBM announced the acquisition of environmental performance management and data analytics provider Envizi for an undisclosed amount. 
The acquisition will enable IBM to create more effective supply chain management solutions, while offering greater coverage of Environmental, Social, and Governance (ESG) metrics. "To drive real progress toward sustainability, companies need the ability to transform data into predictive insights that help them make more intelligent, actionable decisions every day. Envizi's software provides companies with a single source of truth for analyzing and understanding emissions data across the full landscape of their business operations and dramatically accelerates IBM's growing arsenal of AI technologies for helping businesses create more sustainable operations and supply chains," said Kareem Yusuf, general manager of IBM AI Applications, in the announcement. In the future, IBM intends to integrate Envizi with some of its existing solutions, including IBM Maximo, IBM Sterling, IBM Environmental Intelligence Suite, and IBM Turbonomic. Similarly, shortly after raising $150 million in Series E funding in November, customer data infrastructure provider mParticle announced it had acquired customer journey analytics platform Indicative. The company has completed the acquisition in an attempt to help customers gain better visibility over the customer journey, while making it easier to extract data from external data sources like Snowflake. By augmenting its analytics offering, mParticle will be in a strong position to increase its customer base over the course of the year, which already includes brands like NBC Universal, Spotify, and Airbnb. Insights are everywhere Whether an organization wants to monitor ESG goals or the customer journey, there is plenty of data to process, and a growing number of accessible solutions to process it. In 2022, expect to see the number of easy-to-use data preparation tools increase further, as providers aim to break down the barriers to insights. "
14,991
2,021
"Why citizen development is the wrong model for many enterprises | VentureBeat"
"https://venturebeat.com/2021/08/15/why-citizen-development-is-the-wrong-model-for-many-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why citizen development is the wrong model for many enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Many organizations have tried — and failed — to successfully implement citizen development initiatives. The theory behind the model is alluring: Leverage low-code/no-code (LCNC) tools to enable business users to build their own applications. Who knows what users need better than the users themselves? It’s a pitch that many advocates have been making. But the unfortunate reality is that businesses rarely find citizen development to be the right model for their organizations. There can be a lot of finger pointing for organizations when citizen developer projects fail. As for why, I see it play out any one (or more) of four ways: A lack of expertise Perhaps foremost is the fact that business users lack the skills and technical expertise of professional developers. It’s the old adage that a hammer doesn’t make a carpenter. First, there’s logical expertise. Citizen developers don’t always think to do things like normalize data. They might not know about encapsulating reusable functionality into functions and rules, and instead repeat a lot of procedural work. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Discipline expertise , in the sense of subject matter awareness, is also a concern. For example, knowing which data sets are necessary is just as important as knowing how to query them. In workflow scenarios, understanding the implications of parallel activity and whether two or more things are truly independent of each other, or whether there’s some expectation of sequence that’s never been written down, is far from trivial. Finally, there is method expertise. This is more about methodologies and project management than tools or algorithms. It includes things like iterative development cycles, putting applications through development-testing-production stages, and delivery rings for testing new features on a subset of users before rolling them out to successively larger groups. This is usually the kind of thing citizen developers safely avoid — unless their creations become popular. A lack of vision This isn’t a lack of vision of what users want, as citizen developers almost always know their users (because it’s themselves). Instead, this is a lack of vision into what the application will need to become next; how it can be maintained and evolved. 
Too many applications become crises the moment they acquire two or more users. Too many citizen developers find themselves saying, “It was a great solution — until I shared it.” If you’re developing a solution for yourself, it’ll reflect the benefits you wanted and will incur costs/tradeoffs you deem acceptable. It reflects your priorities. The problem is that someone else’s priorities are almost always guaranteed to be different. And the moment you agree to help them, you become responsible for them. Some sacrifices that are fine for you aren’t fine for them. They demand changes, and they want them right away. And just like that, you’ve become a miniature version of the IT department. A lack of fit This refers to a sense of knowing which tools and techniques to use to solve different kinds of problems. Without this awareness, citizen developers often attempt to use the same tool to fit every possible problem. In most cases, while the tool allows you to avoid coding like a developer, it doesn’t eliminate the need to think like one. Tool builders often work around this issue by using wizards. Wizards walk aspiring citizen developers through a series of questions and choices, then generate a ready-to-use (or at least ready-to-adjust) application for them. But wizards work because they impose countless constraints on available choices and on which kinds of problem they’re prepared to address. If constraints aren’t acceptable, you’ll need to think like a developer. This challenge cuts both ways: Professional developers, despite their best efforts to interview, discuss and negotiate requirements with users, still struggle with understanding the business task at hand and the desired outcomes of the applications they’re asked to build. A lack of consistency As citizen development is adopted, from an organizational perspective this isn’t about one application. It’s about two, five, 20, 50, or even hundreds of applications — and where each one is completely unique. Lack of consistency drives users crazy. It also increases training costs, slows down adoption, and makes every new application a one-off job with no ability to borrow from previous work. That drives management crazy. At the end of the day, it makes it much harder — if not impossible — for anyone who’s not the author to maintain the application. How IT can rethink citizen development Taking all this into account leads to two observations: First, difficulty with citizen development doesn’t directly equate to failure of LCNC tools. It’s the approach, not the tools, that can lead to problems. In truth, LCNC tool are used far more often by professionals than they are by amateurs. Second, just because the pendulum might have swung too far in one direction doesn’t mean it has to swing all the way back in the other. Citizen development is not easy, it’s not a panacea, and it’s certainly not magic. In fact, without the right organizational culture, it may not work at all. That said, the idea of citizen development has value, and the motivation behind it is real. A hybrid model of citizen- assisted development borrows from the best ideas behind citizen and professional development and melds them into something likely to succeed far more often. Citizen-assisted development assigns the right responsibility to the right people and keeps them working together the entire time. Stakeholders and users are assigned the task of creating something (perhaps using an LCNC tool) that illustrates what they want. 
Professional developers are assigned the responsibility for making those prototype examples into things that really work, considering exceptions that must be handled, integration that cannot be taken for granted, data access security, etc. Instead of citizen development giving birth to applications that get orphaned close after, it’s a shift to creating sustainable teams that are made up of innovators and curators working together. How does IT woo would-be citizen developers? By pivoting to soft power and ceding hard power, offering things of value, things that are hard to do when citizen developers attempt to go it alone. Things like: Lower-cost licenses. IT can purchase these at scale. Painstakingly researched and selected tools. IT has the time, expertise, and hopefully the will to do this, and its expertise can make sure that applications aren’t just easy to drag and drop but are also secure and governable. Education and training to bring the cost of acquiring knowledge down and significantly speed up the process. A curated library of resources, components, data sets, and other things that don’t have to be reinvented every time. Security matters here, but it ought to focus on right-to-know as opposed to need-to-know. Expertise with integration. Arguably, it’s hard for anyone other than IT to do this. Individual citizen developers in different departments are ill-equipped, and in some cases unwilling, to share data and APIs with each other. Deployment and update management. Backup/restore and other continuity-of service practices. Advocating design standards. This is true for user interface consistency, as well as techniques and resources. IT’s overall responsibility for applications — so citizens may remain responsible for business while the applications created for them are not getting orphaned. None of this is easy — but, ultimately, it’s lower risk and higher payoff. Mike Fitzmaurice is WEBCON ‘s Chief Evangelist and VP, North America and has more than 25 years of product, consulting, evangelism, engineering, and IT management expertise in workflow/business process automation, citizen development, and low-code/no-code solution platforms and strategies. His decade at Microsoft included birthing technical product management and developer evangelism for SharePoint products and technologies. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,992
2,022
"If Scrum isn’t agile, a SaaS solution could be | VentureBeat"
"https://venturebeat.com/2022/02/09/if-scrum-isnt-agile-a-saas-solution-could-be"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community If Scrum isn’t agile, a SaaS solution could be Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article is contributed by Nick Hodges, developer advocate at Rollbar. It’s been about 20 years since the Agile Manifesto was released. While the manifesto itself is great, many of us have been a touch misguided in our implementation of it. Let’s all be honest — how many of us have actually read the Agile Manifesto? If you haven’t read it, read it now. I’ll wait. Interesting, isn’t it? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Scrum seems to be by far the most popular way to “implement” agile development. There are scare quotes there because I’m not sure that Scrum can actually be considered agile. In fact, in a way, Scrum might actually be the exact opposite of what the Agile Manifesto is all about. And it’s time that we all admit that. That’s a pretty bold claim. Here are the three main reasons why this could be the case: Scrum creates false deadlines. Scrum can’t and doesn’t always deliver working software in one sprint. Scrum ceremonies create busywork. False deadlines Scrum has you break your work into “sprints,” usually a two-week chunk of work where a developer agrees to complete a given amount of effort (usually measured in hours or Story Points). At the end of every sprint, the developer is supposed to deliver working software — something that in theory could be delivered to the customer. Well, not every project fits into a two-week sprint. What if the work is actually three weeks long? That’s a conundrum. Scrum tells you to break the work down into two portions: one that is two weeks long and one that is one week long. Then, of course, you have to determine what to do with that extra week at the end of the second sprint because you need to deliver working software at the end of that sprint, too! Scrum creates all these deadlines and makes you fit your project into them whether that project likes it or not. And in my experience, most projects don’t like it. All too often, a project is a single entity — it either works or it doesn’t — and breaking it into deliverable chunks isn’t really possible. And even if you do break it down into pieces, those pieces cannot be considered functional software to be delivered to customers. Working software in two weeks? 
Delivering working software every two weeks makes a huge assumption — that work can be divided into deliverable, functional, two-week chunks. Sorry, but we all know that this simply isn’t the case. It just doesn’t work that way all the time. It’s like trying to fit a square peg into a round hole. Sometimes a project will take three months to complete. Creating false deadlines for that project every two weeks just doesn’t make sense. It creates overhead and false divisions of work that make things inefficient. An eight-week project is an eight-week project, not a 4 or 2-week project. Trying to make it so is simply wasted time and effort. Standing on ceremony Scrum dictates that you have a 15-minute daily stand-up, a one-hour backlog grooming session, and a one-hour sprint planning and retrospective session. Then, many teams even do a one- or two-hour sprint demo at the end of each sprint. If I have my math right, that’s at least eight hours of time every two weeks spent in meetings that — well, let’s be honest — don’t all need to happen. If those meetings run over — as they tend to do — that is more than a whole workday every two weeks. The backlog grooming session is typically spent trying to break large projects down into those two-week chunks. You get out the knife and try to cut that large piece of cheese into exact, equal pieces, two weeks long. You get it done, but really, some of those chunks are probably weirdly shaped. Then, you conduct those sprint planning meetings and take those weird chunks and fit them into the slots of your sprint. Maybe it makes sense how things work out — or maybe not. You also do a sprint retrospective to figure out what might be done better. And then every day you get together and tell each other how it is going with those weird chunks. At the end of those two weeks, you’re supposed to have something that you could ship to customers. But how often is that really true? I’m guessing it isn’t very often. Most often, you’ve taken the time to break a 10-week project into five artificial two-week slices, none of which are actually deliverable until they are all done. When you can’t be agile There are certain situations where a software development organization simply can’t be agile. The first is where there is a fixed deadline. If there is a deadline, you almost certainly can’t make adjustments that may be necessary to finish the project. Changes to what you are coding almost always mean a change to the date. What happens if, when you do the sprint review with the customer, the customer asks for changes? You can’t roll back a whole sprint, redo the work in a different way, and then still ship according to a contracted date. If you can’t adapt to changing requirements, then you aren’t being agile. The second is when there is a fixed cost. This works very similarly to what I described in the previous paragraph — if you are only getting paid a set amount to do a project, you can’t afford to redo portions of the work based on customer feedback. You would very likely end up losing money on the project. Bottom line: If you have a fixed deadline or a fixed amount of time, you can’t strictly follow Scrum, much less the Agile Principles. Hard to call it agile In the end, the Scrum methodology is hard to call agile. It usually doesn’t deliver working software as it says it should. It calls for a lot of time-consuming ceremony time. It creates busywork as you try to split up something that doesn’t really want to be split up. 
It’s a mystery to me how this system was thought to meet the principles of the Agile Manifesto in the first place. What will actually work Okay, so I’ve made some bold claims, but I’ve backed them up. So, here’s a solution. Your feature list will naturally divide into projects — projects that may vary in size and complexity. Embrace that. Accept that projects are different, individual endeavors. Then, do the following: Figure out your best estimate about how much work each project is. Put the projects in priority order, as estimated, on your schedule. You may have to play a little “Schedule Tetris,” but that will be doable. Assign a developer — or developers, if you think that will work better — and give them the specification. Set them on their way developing the project. Bother them as little as possible, but keep an eye on what is going on. Have daily standups if you want, but keep them short and on point — as in “Do you have any questions or problems?” and be fine if the answer is “Nope” and the meeting is two minutes long. (Also, have that meeting on Maker’s Time. ) When it makes sense, check in with the customer (whether that be the product manager or a real live customer) to see how things are going. Err on the side of doing this more rather than less. Make adjustments as necessary. Accept that things might take longer than you planned as a result. Deliver the project when it is done. (I’ll leave the discussion about what “done” means for a later time…) Meet and talk about how the project went well and could have gone better. Repeat this process — in serial and in parallel for as many projects as your team’s capacity can handle. At this point, I have to ask — which of the Agile Principles does this method not satisfy? I’m going to go with none. This system hits all the principles and will deliver software when it’s ready — not any sooner, and not any later. I don’t know what the writers of the Agile Manifesto think of Scrum. I just know what did and didn’t work for me, and Scrum never seemed to make sense. This plan always made sense to me: Figure out the project, do the project, and then deploy the project. Note that I’m not advocating for a return to the Waterfall methodology here — far from it. Checking in with the customer is a critical part of the process. Adjusting on the fly is still a critical part of the process. True agile I’ve proposed a way to do software development in an agile manner. It’s pretty straightforward. Can it actually be done? If so, under what circumstances are best? To answer these questions, I’ll make one more assertion: Being truly agile wasn’t really possible until one could do Continuous Integration (CI) and Continuous Delivery (CD) via a SaaS solution. Another bold statement, I know. But hear me out — this is pretty cool. The agile SaaS Model The typical SaaS application is a web front end with a backend API that serves up JSON. Because the vendor typically controls both sides of this equation, it can make changes to either whenever it wants. Or, put another way, the vendor can continuously deliver new functionality to the customer, whether it be daily (or hourly!) bug fix updates or features delivered immediately upon completion. Your code and your project are continuously improving. You no longer have to hold completed work “in inventory” waiting for the next big release to deliver value. Value is delivered as it is created. 
So, as a result, the following is not only possible but commonly done with modern software architectures: 1) A project is defined and assigned to a developer (or developers). 2) The developer(s) completes the project using the above methodology. 3) They merge their code onto the main branch. 4) The CI/CD system detects the merge, builds the new code, runs the tests (which pass), and then deploys the new code immediately. 5) By utilizing a feature flag, the code is only visible to the few customers who have agreed to beta test the feature. 6) The developer gets immediate feedback about the code from beta testers. 7) If changes or adjustments are required, the code changes can be rolled back, changed, and redeployed. 8) Repeat steps six and seven as necessary until everyone is satisfied. 9) New functionality is delivered to the entire user base immediately by removing the feature flag. It’s really enabled by the SaaS model, which makes deployment simple and easy for every customer. Instead of having to wait and install a heavy client on every customer site, you can immediately deliver new functionality to all customers with the push of a button. Now that, my friends, is agile. Don’t follow an arduous plan or a strict process The Agile Manifesto was forward-looking. Perhaps too much so. And I believe it’s been pulled away from its original moorings. The Agile Manifesto explicitly states as its base premises: Individuals and interactions over processes and tools. Working software over comprehensive documentation. Customer collaboration over contract negotiation. Responding to change over following a plan. I feel bad saying this, but it’s hard to see how any of those four statements are satisfied by Scrum. The first and the last are particularly germane, as nothing could be “following a plan” more than sticking to Scrum no matter what. And what is Scrum but a strict plan to follow? The Agile Manifesto foresaw what was to come and actually can be realized in the SaaS model. Perhaps it was just 20 years too early. Nick Hodges is a developer advocate at Rollbar. He is a seasoned development technologist with over 30 years of leadership experience and a passion for leading, developing, and mentoring diverse development teams. "
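To make the flag-gated rollout described above concrete, here is a minimal sketch of the kind of check a SaaS backend might perform. The flag name, beta-tester list, and endpoint shape are hypothetical; a real product would typically read this state from a feature-flag service or database rather than from in-memory constants.

```python
# Minimal sketch of flag-gated delivery (hypothetical flag names and beta list;
# not tied to any specific feature-flag product).

BETA_TESTERS = {"acme-corp", "globex"}          # customers who opted into the beta
FEATURE_FLAGS = {"new_reporting_ui": "beta"}    # states: "off" | "beta" | "on"


def is_enabled(flag: str, customer_id: str) -> bool:
    """Return True if `flag` should be visible to `customer_id`."""
    state = FEATURE_FLAGS.get(flag, "off")
    if state == "on":
        return True                              # rolled out to the entire user base
    if state == "beta":
        return customer_id in BETA_TESTERS       # only opted-in customers see it
    return False


def report_endpoint(customer_id: str) -> dict:
    """Backend handler: serve the newly deployed path only where the flag allows it."""
    if is_enabled("new_reporting_ui", customer_id):
        return {"ui": "new"}                     # freshly merged and deployed code path
    return {"ui": "legacy"}                      # existing behavior for everyone else


print(report_endpoint("acme-corp"))  # {'ui': 'new'}   (beta tester)
print(report_endpoint("initech"))    # {'ui': 'legacy'}
```

Flipping the flag state to "on" (or deleting the check once the new path has proven itself) is the moment the whole user base gets the feature, which is what makes the rollback-and-retry loop in the steps above so cheap.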
14,993
2,022
"Citizen developers are going to need a leader | VentureBeat"
"https://venturebeat.com/2022/04/05/citizen-developers-are-going-to-need-a-leader"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Citizen developers are going to need a leader Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On a sunny day in June 1789, a crowd of peasants stormed a tower in France, desperate to secure gunpowder. They were fed up with not being able to vote under an out-of-touch king. What followed was known as the French Revolution. It toppled the monarchy, spread power to the people, unleashed chaos and ended in … another dictator. Nature abhors a vacuum and history teaches us that large, leaderless enterprises eventually appoint someone. (It’s called a “revolution” because it ends where it starts.) If nobody steps up, we tend to get bad leaders. This is top of mind because of the rise of so-called citizen developers, folks who don’t code but nevertheless build software thanks to no-code/low-code (LC/NC) tools like Microsoft Power Apps. It’s an exciting time. Tens of millions of additional people are now able to build software. But as someone who’s spent his career thinking about which methodologies and tools create high-quality software, I can assure you that empowering people to create things is not the same as ensuring they build effectively. I believe citizen developers are going to need a leader. The rise of the citizen developer The world population of citizen developers isn’t massive — just a few tens of millions, according to The Economist — but it’s growing at a blistering 40% year-over-year. That’s three times faster than the population of developers (25M) is growing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In 2021 alone, Microsoft Power Apps, one of the best examples of democratized app creation suites, doubled in size to 10M users. Half of large insurers, many of them victims of legacy internal systems, are reportedly considering giving their entire company access to apps like this. No doubt, all these newly anointed “developers” are going to identify and address niche issues no central team would ever have noticed or prioritized. Free of central IT’s benevolent gaze, folks are free to build apps that make their lives easier, which will no doubt bleed into afterwork hours, flexing the company’s development capacity to unimaginable heights. But the warning signs are already clear. As The Economist reports, one employee at the Australian telecom firm Telstra created an app that unified 70 internal systems that’s used by 1,300 coworkers. The challenge? 
The interface presents users with an egregious 150 buttons and resembles a space shuttle control panel. Perhaps there’s a parallel to other creator platforms, like YouTube, TikTok, or Minecraft, where the vast majority of what’s created is low-quality, buggy, and enjoyed by few. I think it’s highly unlikely that individuals without an engineering background are going to think about interoperability, security or compliance , to say nothing of the interface. What might the sum of all these troublesome interfaces and user-generated apps create? What happens when these apps clash, overlap, conflict, and can overwrite each others’ data? Who maintains them, especially as the underlying systems each evolve? Who manages support requests? Does it eventually grow large enough for IT to inherit? Not unlike the siege of that fated Bastille in 1789, the people may acquire gunpowder. The question is whether they’ll know what to do with it. Citizen developers need two guides — one within and one without I started my career in software back in the early 2000s at a time like today, flush with new technology, rapid experimentation, and a feeling of limitless possibility. I was heavily influenced by the paradigm shift from heavy processes to lightweight ones. These were the days when the Agile Manifesto was written, unit-tests became an accepted practice, the gang of four’s Design Patterns was on everyone’s reading list, and some poor souls had to deal with Unified Modeling Language s. Part of what made the small group of people who defined that era so influential was that it was just a handful of leaders who you could identify, point to, and follow. They were also interested in seeing the industry develop, not just seeing any given software vendor win, so they could say anything in the pursuit of truth. Together, they had a profound impact on the people within companies who were actually building the software, or learning about it in school. I see that dynamic as a model for how leaders for citizen developers might emerge. I imagine there’ll be two classes: Agnostic industry innovators — public figures trying to solve the challenge of coordinating the work of millions of citizen developers. In my mind, it’s crucial that they be vendor agnostic so they can remain honest. Internal business engineers — a handful of architects within each company or business group who coordinate citizen development. They bring all the powerful tools and methodologies from software development to bear, to ensure all those federated apps interlock, and are secure, compliant, available, and friendly to use. They disseminate these methodologies and tools to others. The advisory firm Gartner strongly advocates hiring people who fit that second group. They might even sit outside IT, says Gartner, given how closely they’ll need to understand the business. If you empower these “business technologists,” you are reportedly 2.6x more likely to accelerate digital transformation. At Salto, we call these individuals “ business engineers ,” a compound moniker that conveys how important it is that they not just configure systems, but do so to benefit the company, and the individuals who use those systems. Whatever you call yours, I think every company that courts citizen development needs them. And whoever those agnostic industry innovators are today, I hope they start doing a lot more talks and provide the rest of us the methodologies and tools to guide us through this revolution. Citizen developers need leaders. Could that be you? 
The French Revolution ended in a second dictator — Napoleon Bonaparte. You don’t have to read much history to know that he was no benevolent ruler, and that he led the people into a decade of devastating war. When leaderless organizations don’t select their leaders, their leaders select themselves, and they tend not to be the people we want in charge. Amidst the rapid rise of citizen developers in your business, you have to ask: who’s going to lead them? I think it’s important to figure that out now, before fate decides for you. History tells me you won’t be pleased with the result. Gil Hoffer is the CTO and cofounder of Salto. "
14,994
2,022
"Dispelling myths surrounding citizen developers | VentureBeat"
"https://venturebeat.com/2022/04/10/dispelling-myths-surrounding-citizen-developers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Dispelling myths surrounding citizen developers Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The term “citizen developer” has become increasingly common with companies accelerating their digital transformation efforts. These individuals hold various roles at organizations but share a common ambition: to conceive and build task-based apps that streamline work or improve operations in their business area. Through their insider knowledge, these employees are able to generate new web or mobile applications that solve specific business problems and speed daily work. Citizen developers typically use no-code or low-code software to build these apps. According to Gartner’s prediction , citizen developers will soon outnumber professional developers by a ratio of 4:1. Although these business analysts or business domain experts have no formal training in using development tools or writing code, they’re succeeding at creating valuable business applications. Gartner recommends that organizations embrace citizen developers to achieve strategic goals and remain competitive in an increasingly mobile business world. Despite the rise of citizen developers within organizations, many companies still dismiss the value and importance of citizen development. Let’s dispel some of the most common myths. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 1. Low-code applications can’t compete with enterprise-grade applications A common myth surrounding citizen development is that low-code applications cannot meet the requirements of enterprise-grade applications. Enterprise-grade applications are built to support consistent integration with other applications and the existing IT framework, with the term “enterprise-grade” being coined as IT became increasingly consumerized. Because low-code development delivers business apps without needing large amounts of programming, the longstanding belief is that low-code doesn’t have the capacity to meet enterprise standards. This is no longer true. Typically, citizen developers build low-code or no-code (LC/NC) apps for a specific business purpose, such as bridging gaps between systems or automating routine processes to improve team productivity. 
Often, limited scope, task-based apps are created by citizen developers, while large scope apps with complex security and data requirements are still produced by professional developers, using mainstream programming languages. Usually, LC/NC software comes with predesigned templates or drag-and-drop interfaces that consider best development practices, common enterprise requirements and routine IT practices. The software guides citizen developers to create needed apps quickly while adhering to the best app design and development practices. This allows more employees to make great mobile and cloud applications that speed business tasks, while minimizing risk to the organization. Because enterprise-grade applications are increasingly being designed to be scalable and robust across the environments they’re used in, the technicalities and predesigned nature of low-code development can match the required standards set by enterprise-grade apps. Thanks to low-code platforms, complete enterprise-grade applications can be developed within days, contributing to why company executive are increasingly making low-code development their most significant automation investment. 2. Alleged security risks that accompany citizen development Security is a vital component of any application. With security breaches on the rise and outcomes severe, like ransomware , addressing security issues must be of utmost importance to any organization considering citizen development. Data security is usually the responsibility of the IT departments, which identify and migrate any security risks as they develop apps. However, just because an application is developed by a citizen developer using LC/NC software tools, doesn’t mean there will necessarily be heightened security risks. According to recent forecasts, LC/NC applications will account for 65% of development activity within the next two years. To meet these enterprise expectations, most low-code platforms now come with built-in security features or code scans to enforce standard security practices. Vendors of LC/NC software tools now include a wide range of built-in security features, such as file monitoring, user control and code validation. While security features in LC/NC software are becoming more extensive, IT departments should make sure any development software used by the company has been vetted and adheres to company security policies. In addition, having an IT approval process for apps before they’re officially used could be a wise policy for IT teams to establish. 3. Citizen development creates shadow IT Another widespread myth about citizen development is the creation of shadow IT groups, outside the designated ones. This means application development can become unmanaged, ungoverned and of questionable quality. The reality can be very different. Many organizations struggle with low IT funding and resources. In these cases, citizen development can come to the rescue to provide rapid business solutions to meet rapidly changing business needs. The key to overcoming the risk of shadow IT in these situations is to establish strong governance and collaboration over the process. Instead of slowing the efforts of citizen developers, IT teams should encourage these new app creators by providing guidelines and resources for app creation that are in line with the best IT practices. One way is by sanctioning an approved LC/NC development tool. 
Some LC/NC platforms used by citizen developers are designed to eliminate technical complexity and provide complete transparency, control and governance based on the users’ business needs. LC/NC platforms can also enable an environment of collaboration between citizen developers and the IT department, allowing IT to maintain control over the development process. A second way to encourage citizen development is to introduce certifications and badges that celebrate citizen developers’ app design and development accomplishments. The true benefits of citizen developers Citizen developers can accelerate transformation efforts by using LC/NC software to build their own applications. Since citizen developers are usually employees in key areas of the organization, they are most aware of unique business needs and can therefore develop mobile applications that specifically cater to the business. LC/NC software gives virtually any of these employees the ability to build mobile applications and, in doing so, assist in the company’s transformation. The cost benefits are significant: companies can introduce innovative apps, save work hours and attract more revenue, and they avoid having to hire specialized developers or outsource app development projects. Additionally, citizen developers can use LC/NC software based on prebuilt modules that make development many times faster than starting from square one, which reduces the time required to develop, design, test and deploy apps. Citizen development is not a fad meant to overpower IT teams, nor does it mean that employees will be left to fend for themselves. IT departments maintain a key role in providing adequate resources and supporting the company’s digital transformation efforts. The benefits of citizen development far outweigh the risks. However, organizations must foster a collaborative effort between their citizen developers and IT departments to meet business needs and maintain competitive advantage. Instead of acting as a gatekeeper to technical innovation and digital transformation, IT teams should seek to empower citizen developers and work with them to solve business and technical problems. Amy Groden-Morrison is VP of marketing and sales operations for Alpha Software. "
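One way to picture the governance this article recommends (a sanctioned LC/NC tool plus an IT review before an app goes live) is a simple publish gate. This is only an illustrative sketch: the record fields and the `may_publish` rule are hypothetical stand-ins for whatever catalog and policy an IT team actually maintains, not features of any particular low-code platform.

```python
# Illustrative approval gate for citizen-built apps. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class AppRecord:
    name: str
    owner: str                     # the citizen developer who built it
    reviewed_by_it: bool           # security/compliance review completed
    uses_approved_platform: bool   # built on a sanctioned LC/NC tool


def may_publish(app: AppRecord) -> bool:
    """An app goes live only after IT review, and only on an approved platform."""
    return app.reviewed_by_it and app.uses_approved_platform


expense_tracker = AppRecord("expense-tracker", "j.doe",
                            reviewed_by_it=True, uses_approved_platform=True)
print(may_publish(expense_tracker))  # True -> safe to roll out to the team
```

The point of the sketch is that the gate is lightweight: it does not slow citizen developers down, it just records that the two checks the article describes actually happened before an app reaches coworkers.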
14,995
2,022
"The AI in a jar | VentureBeat"
"https://venturebeat.com/2022/04/14/the-ai-in-a-jar"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The AI in a jar Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The “brain in a jar” is a thought experiment of a disembodied human brain living in a jar of sustenance. The thought experiment explores human conceptions of reality, mind, and consciousness. This article will explore a metaphysical argument against artificial intelligence on the grounds that a disembodied artificial intelligence, or a “brain” without a body, is incompatible with the nature of intelligence. The brain in a jar is a different inquiry than traditional questions about artificial intelligence. The brain in a jar asks whether thinking requires a thinker. The possibility of artificial intelligence primarily revolves around what is necessary to make a computer (or a computer program) intelligent. In this view, artificial intelligence is possible if we can understand intelligence and figure out how to program it into a computer. René Descartes The 17th-century French philosopher René Descartes deserves much blame for the brain in a jar. Descartes was combating materialism, which explains the world, and everything in it, as entirely made up of matter. Descartes separated the mind and body to create a neutral space to discuss nonmaterial substances like consciousness, the soul, and even God. This philosophy of the mind was named cartesian dualism. Dualism argues that the body and mind are not one thing but separate and opposite things made of different matter that inexplicitly interact. Descartes’s methodology to doubt everything, even his own body, in favor of his thoughts, to find something “indubitable,” which he could least doubt, to learn something about knowledge is doubtful. The result is an exhausted epistemological pursuit of understanding what we can know by manipulating metaphysics and what there is. This kind of solipsistic thinking is unwarranted but was not a personality disorder in the 17 th century. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! There is reason to sympathize with Descartes. Thinking about thinking has perplexed thinkers since the Enlightenment and spawned odd philosophies, theories, paradoxes, and superstitions. In many ways, dualism is no exception. The rise of behaviorism It wasn’t until the early 20 th century that dualism was legitimately challenged. So-called behaviorism argued that mental states could be reduced to physical states, which was nothing more than behavior. 
Aside from the reductionism that results from treating humans as behaviors, the issue with behaviorism is that it ignores mental phenomenon and explains the brain’s activity as producing a collection of behaviors that can only be observed. Concepts like thought, intelligence, feelings, beliefs, desires, and even hereditary and genetics are eliminated in favor of environmental stimuli and behavioral responses. Consequently, one can never use behaviorism to explain mental phenomena since the focus is on external observable behavior. Philosophers like to joke about two behaviorists evaluating their performance after sex: “It was great for you, how was it for me?” says one to the other. By concentrating on the observable behavior of the body and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about intelligence. This is the reason why behaviorists fail to define intelligence. They believe there is nothing to it. Consider Alan Turing’s eponymous Turing Test. Turing dodges defining intelligence by saying that intelligence is as intelligence does. A jar passes the Turing Test if it fools another jar into believing it is behaving intelligently by responding to questions with responses that seem intelligent. Turing was a behaviorist. Behaviorism saw a decline in influence that directly resulted in the inability to explain intelligence. By the 1950s, behaviorism was largely discredited. The most important attack was delivered in 1959 by American linguist Noam Chomsky. Chomsky excoriated B.F. Skinner’s book Verbal Behavior. A review of B. F. Skinner’s Verbal Behavior is Chomsky’s most cited work, and despite the prosaic name, it has become better known than Skinner’s original work. The cognitive revolution Chomsky sparked a reorientation of psychology toward the brain dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of the mind. Functionalism views intelligence (i.e., mental phenomenon) as the brain’s functional organization where individuated functions like language and vision are understood by their causal roles. Unlike behaviorism, functionalism focuses on what the brain does and where brain function happens. However, functionalism is not interested in how something works or if it is made of the same material. It doesn’t care if the thing that thinks is a brain or if that brain has a body. If it functions like intelligence, it is intelligent like anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time. The American philosopher and computer scientist Hilary Putnam evolved functionalism in Psychological Predicates with computational concepts to form computational functionalism. Computationalism, for short, views the mental world as grounded in a physical system (i.e., computer) using concepts such as information, computation (i.e., thinking), memory (i.e., storage) and feedback. Today, artificial intelligence research relies heavily on computational functionalism, where intelligence is organized by functions such as computer vision and natural language processing and explained in computational terms. Unfortunately, functions do not think. They are aspects of thought. The issue with functionalism—aside from the reductionism that results from treating thinking as a collection of functions (and humans as brains)—is that it ignores thinking. 
While the brain has localized functions with input–output pairs (e.g., perception) that can be represented as a physical system inside a computer, thinking is not a loose collection of localized functions. Critiques on computational functionalism John Searle’s famous Chinese Room thought experiment is one of the strongest attacks on computational functionalism. The former philosopher and professor at the University of California, Berkley, thought it impossible to build an intelligent computer because intelligence is a biological phenomenon that presupposes a thinker who has consciousness. This argument is counter to functionalism, which treats intelligence as realizable if anything can mimic the causal role of specific mental states with computational processes. The irony of the brain in a jar is that Descartes would not have considered “AI” thinking at all. Descartes was familiar with the automata and mechanical toys of the 17th century. However, the “I” in Descartes’s dictum “ I think, therefore I am,” treats the human mind as non-mechanical and non-computational. The “cogito” argument implies that for thought, there must also be a subject of that thought. While dualism seems to grant permission for the brain in a jar by eliminating the body, it also contradicts the claim that AI can ever think because any thinking would lack a subject of that thinking, and any intelligence would lack an intelligent being. Hubert Dreyfus explains how artificial intelligence inherited a “lemon” philosophy. The late professor of philosophy at the University of California, Berkeley, Dreyfus was influenced by phenomenology, which is the philosophy of conscious experience. The irony, Dreyfus explains, is that philosophers came out against many of the philosophical frameworks used by artificial intelligence at its inception, including behaviorism, functionalism and representationalism which all ignore embodiment. These frameworks are contradictory and incompatible with the biological brain and natural intelligence. The pragmatic philosophy of AI To be sure, the field of AI was born at an odd philosophical hour. This has largely inhibited progress to understand intelligence and what it means to be intelligent. Of course, the accomplishments within the field over the past seventy years also show that the discipline is not doomed. The reason is that the philosophy adopted most frequently by friends of artificial intelligence is pragmatism. Pragmatism is not a philosophy of the mind. It is a philosophy that focuses on practical solutions to problems like computer vision and natural language processing. The field has found shortcuts to solve problems that we misinterpret as intelligence primarily driven by our human tendency to project human quality onto inanimate objects. The failure of AI to understand, and ultimately solve intelligence, shows that metaphysics may be necessary for AI’s supposed destiny. However, pragmatism shows that metaphysics is not necessary for real-world problem-solving. This strange line of inquiry shows that real artificial intelligence could not be real unless the brain in a jar has legs, which spells doom for some arbitrary GitHub repository claiming artificial intelligence. 
It also spells doom for all businesses “doing AI” because, aside from the metaphysical problem, there is an ethical one that would be hard, if not impossible, to resolve without declaring your computer’s power cord and mouse parts of an intelligent being, or without the kind of animal experimentation that attaching legs and arms to your computers would require. Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. "
14,996
2,021
"Report: 54% of today's 'ethical hackers' are Gen Z | VentureBeat"
"https://venturebeat.com/2021/11/20/report-54-of-todays-ethical-hackers-are-gen-z"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 54% of today’s ‘ethical hackers’ are Gen Z Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a new report by Bugcrowd, 54% of today’s ethical hackers belong to Gen Z (individuals born between 1997 and 2012) and an additional 35% are millennials (born between 1981 and 1996). Younger than ever, this season of ethical hackers also represents the most ethnically diverse generation in history. This comprehensive annual report pulls back the curtain on ethical hackers to provide new insights into their backgrounds, lifestyles, skills, and motivations. The practice of ethical hacking helps root out security vulnerabilities, and in so doing it has become a mainstream vocation that allows diverse individuals to generate a sustainable livelihood from anywhere in the world. Ethical hackers live in 61 countries across six of the world’s seven continents. They aid organizations with identifying unfixed bugs throughout their infrastructure and software development lifecycles. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Despite the financial incentives for security researchers, more than half of them describe ethical hacking as work that they find intrinsically motivated. They say they do it to cultivate personal development, challenge themselves, seek excitement, and give back to the community. In fact, 86% of hackers think reporting a critical vulnerability is more important than making money. Seventy-four percent of respondents agree that vulnerabilities have increased since the onset of COVID-19. Notable highlights from the report include descriptions of whom these ethical hackers are, where their work is focused, and why more forward-looking companies are turning to bug bounty programs to continuously secure innovation and mitigate risks. The report analyzes survey responses and security research conducted on the Bugcrowd Platform from May 1, 2020, to August 31, 2021, in addition to millions of proprietary data points collected on vulnerabilities from 2,961 security programs. It also features the personal profiles of several ethical hackers who work on the Bugcrowd Platform. Read the full report by Bugcrowd. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,997
2,011
"Equinix launches data center marketplace | VentureBeat"
"https://venturebeat.com/2011/10/23/equinix-launches-data-center-marketplace"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Equinix launches data center marketplace Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Equinix has a lot of bandwidth for running web sites. The web hosting company has more than 99 data centers around the world that form much of the backbone of the internet. Today, it’s launching the Equinix Marketplace platform so that it can help the company’s more than 4,000 partners, customers and suppliers do business with each other more easily. You could think of it as a federation of housing contractors that all work with each other. But Redwood City, Calif.-based Equinix isn’t connecting a network of physical goods contractors. It’s hooking up companies that are in the business of buying and selling bandwidth and storage for web sites. “The network is aimed at transforming the data center into a revenue center,” said Jarrett Appleby, chief marketing officer of Equinix. “You could think of this like a service directory, a Yellow Pages for data centers.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The platform makes it possible for any company with a presence in an Equinix data center to quickly find and directly connect to others in the network in the name super-fast connectivity and creating new services. For instance, Bloomberg used the Equinix data centers to create regional services in new locations around the world. This may not be the sexiest market around. But it is a reminder that the internet is a physical place, existing as a series of interconnected data centers. If one service is closely connected to another one, the service level can be higher. Vendors who use the same web-hosting company can also trust each other more easily. And that is what Equinix is trying to make happen. Equinix created the platform in part because its partners were already connecting to each other. In 2010, interconnections among Equinix customers grew 27 percent. Equinix is already home to more than 700 software-as-a-service cloud providers, 675 backbone and mobile networks, 450 online media, content and ad sites, and 600 electronic trading and financial market participants. Equinix operates in 38 markets around the world. Rivals include Amazon.com. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,998
2,020
"The DeanBeat: My quest to defeat latency | VentureBeat"
"https://venturebeat.com/2020/09/18/the-deanbeat-my-quest-to-defeat-latency"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opinion The DeanBeat: My quest to defeat latency Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. During the pandemic, I’ve been playing games and clogging up the internet like a lot of other gamers. On the highest level, the internet has held up. Comcast reports that despite surges in demand, it has been able to keep up with our constant need to see TikTok videos, Netflix shows, and play Candy Crush Saga. But when it comes to hardcore multiplayer games like Call of Duty: Warzone , it’s been a haphazard time. I’ve been playing the battle royale game with mixed results. I have played 443 games of Warzone, and that puts me in the top 9% of players. But I’ve only won two games and came in the top 10 a total of 68 times. This puts me in the top 21%. I’ve killed 3,079 players and been killed 4,202 times, for a 0.73 kill/death ratio. This puts me in the top 27% of players. For all the time I have spent in this game — 5 days and 7 hours — I should be better. Naturally, I want to blame someone else besides myself. And my enemy is latency. Also known as lag. (OK, I admit I can’t really blame lag, but let’s discuss this for a while.) Latency is the time it takes a data signal to travel from one point on the internet to another point and then come back. This is measured in milliseconds (a thousandth of a second). If the lag is bad, then fast-action games don’t work well. Your frame rate can slow down to a crawl, or you can try to shoot someone and miss because, by the time you aim at a spot, the person is no longer there. So I went on a quest to figure out the problem. This happens relatively rarely in the game, but it happens. An online game might need only 150 kilobits a second to pass data back and forth, according to nonprofit CableLabs. But the ping rate (the time it takes to reach a destination on the internet) is far more important. You can test your ping rate on sites such as Meter.net. Mine comes out to 61.3 milliseconds on a server in Dallas with Meter.net, but it was just 11 milliseconds on a server in San Jose on Speedtest.net. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: I’ve only won two Warzone matches so far. If someone can shoot me faster than that, then I’m dead. The problem is that sometimes you get spikes in pings that disrupt your game, said CableLabs’ Barry Ferris in an interview with GamesBeat. 
He should know, as he and Matt Schmitt, the principal architect on the wired team at CableLabs, talked to 50 game companies about the latency problem. Games can be designed for latency compensation, but that works only when the latency is steady. It can’t compensate for ping spikes, Ferris said. And if you try to stream data upstream, like on a Twitch livestream, at the same time as you’re playing, you’ll make the network even more congested. I record my video, but I don’t livestream because that would be way too embarrassing. But many other people do. Is it my computer? Above: I’m tested out lag on a Razer Blade Pro 17 gaming laptop. I tried to see if my so-so performance in Warzone was due to my computer. For my desktop, I have a Falcon Northwest machine that is a year old and it has a good Nvidia GeForce 2080 Super graphics card in it. Most of my Warzone gameplay has been on this machine, which is wired into my Comcast router. I made sure it was wired because I didn’t want slower wireless results to mess up my experience. To see if a computer made a difference, I checked out the Origin PC EVO17-S laptop as well as a laptop from Razer. On the EVO17-S, Warzone ran at 114 frames per second on maximum settings at 1080p. It can also run Far Cry 5 at 99 frames per second, Shadow of the Tomb Raider at 94 frames per second, and Metro Exodus at 60 frames per second. It was fairly noisy, with ambient sound at 41.6dB. The EVO17-S has a 17.3-inch FHD 240Hz display. And it has an Intel Core i7-10875H CPU and Nvidia GeForce 2080 RTX Super with Max-Q and 8GB of video memory. It also had 16GB of Corsair Vengeance 2666MHz RAM. It sells for $2,941. Meanwhile, the Razer Blade Pro 17 sports a 17.3-inch screen with a 300Hz refresh rate (or 300 times a second). It has a 4K panel driven by Nvidia GeForce RTX 2080 Super graphics. The CPU is a 10th Gen Intel Core i7-10875H processor with eight cores and a base speed of 2.3GHz and turbo boost of 5.1GHz. This machine costs $2,600 and up. Above: The Origin PC EVO17-S laptop I’m testing out. These are the machines that should give me an edge with my ability to react to things happening on the screen. I actually won one of the two victories while playing on the Origin machine. But I encountered a tradeoff here. The fan was running so loud on the laptop, trying to cool the machine, that other players noticed. They asked what the loud sound was. I had to tell them it was the fan on my laptop. Sadly, I can’t say that the faster monitors on these laptops made any difference for me in getting victories in Warzone. I was glad to get the second victory, but I think it was due to the fact that I was playing with some Warzone badasses at Griffin Gaming Partners: Anthony “Stembo” Palma, James “Stvrgeon” Wing, and Pierre “PierrePressr” Planche. They created this highlight reel from the game. And here’s the match from my view , including chopping folks up with helicopter blades. I also started playing Microsoft Flight Simulator. This game streamed a lot of data from the cloud into my laptops. But it couldn’t quite keep up. At one point, I looked down while flying over the Bay Area. I saw the San Mateo Bridge stretching across the Bay. But it only went halfway, and then stopped. Because the data wasn’t streaming in fast enough, I saw the bridge stopped in the middle of the Bay, and beyond it were fuzzy details on the horizon. This is one reason why Jon Peddie Research predicted this game could spur billions of dollars in hardware spending. 
These laptops are very nice gaming computers, but I’m not so sure they help me win. Getting back to the milliseconds, I’d love to talk to more people about this. But Nvidia CEO Jensen Huang said during his recent event introducing new $700-plus GeForce 3080 graphics cards, that in the game Valorant, a sniper can watch at a gap between two walls. A player character passing by that gap can cross it in 180 milliseconds, or maybe a fifth of a second. He said a typical gamer has a reaction time of 150 milliseconds. That leaves only 30 milliseconds for any other delays in the network. If you use one of the new 360 hertz displays powered by the new graphics card, you can get back 50 milliseconds and have a better chance to shoot the player going past the crack in the wall if you use the new displays. Is it my router? Above: Netgear’s latest Nighthawk router fits in with your gaming gear. Just yesterday, Netgear announced a Nighthawk Pro Gaming XR1000 WiFi AX5400 Router that claims to reduce lag for console and PC gamers. Netgear product manager Max Wu said it can reduce ping rates (the time it takes to send out a signal and get it back) up to 93% in a congested network. Wu noted just how crowded our homes are getting — with an average of 15 or more devices connected to a network — and how it really gets bad when you have someone playing Warzone, another person watching Netflix, and somebody else on Zoom. If we all start using cloud gaming services like Microsoft’s xCloud, Google’s Stadia, and Nvidia GeForce Now, it will get worse. If your network is clogged already, these services will clog it even more, as it’s like sending a full Netflix stream down your pipes. Game streaming may use as much as tens of megabits of bandwidth per second, according to CableLabs, on top of needing low latency. “This is not like other traffic we’ve carried before, as it is both latency-sensitive and high data rates too,” Schmitt said. With the Netgear app for the router, you can tell your router not to be stupid. Because the internet was designed to function in the case of nuclear war, it sends traffic in hops to different nodes of the internet, picking up lag along the way. With the Netgear software, you can look at a dashboard map and tell your router not to seek out a router in Russia if you’re playing in the U.S. You can “geo-fence” your router so that it only seeks routers in an area that are close to you. You can also protect a gaming channel from your home to the internet, and put less priority on your sibling’s Netflix stream or other traffic. This and other features are why Netgear believes gamers will pay $350 for this router. “When I was a lot younger I was really into cars, and I see that people that are into PCs and competitive gaming are kind of like gearheads with cars,” Ben Acevedo, a gaming expert at Netgear, said in an interview. “Anything to get just that little extra inch, maybe it’s only one horsepower or one millisecond, you will go for it and chase those things down.” He added, “With cars, it’s the tires that grip the road. And with online gaming, it’s your modem and your router. That’s what grips the internet. That’s what gets you where you want to go.” Among gamers, players such as esports athletes would definitely pay any amount of money to get even a slight edge. Is it the internet? Above: Cox Enterprises is launching Elite Gamer. Here’s a little secret. Bandwidth doesn’t fix latency. If your download speed is 100 megabits a second, your ping rate may be a certain number. 
If you pay for more bandwidth at 400 megabits a second, your ping rate may stay the same. That’s because cable providers tend to charge based on bandwidth, rather than latency. You could download a PC game faster with more bandwidth. But it won’t play faster. A lot of unused fiber optic cable is out there — called dark fiber — but using it isn’t going to help that much, said Schmitt at CableLabs. But other things can. In June, Cox launched a new paid service called Elite Gamer for Cox subscribers at an extra cost of $5 a month. It’s only available in the Cox areas (6 million subscribers in 18 states), and Cox claims it can improve your response time as much as 32%. Cox provides you with software that downloads onto your machine and then identifies what you’re playing. It identifies the traffic pattern and then finds you other routes so that your game data travels the best path. It provides players with a dashboard so they can see the improvement. WTFast does something similar, though it is not tied to Cox’s network. The company charges a fee of $8.33 a month to gamers. It finds a better way to connect a user to the game servers, and so it sets up specific connections to servers for the game. I noticed that I was getting an improvement of 10 to 60 milliseconds, or an improvement of 10% to 40%, with a different result for each game session. WTFast only works with specific games, and so you have to see if it can fix your particular game. I’m not sure I always believed it was working, but the data reports were very detailed. WTFast actually works with Warzone, and during one moment, I felt like it made a difference. I was in Warzone in the aircraft Boneyard in the back of a plane. I started getting shot in the back by another player. I turned around and fired late. And I got the kill! The guy shouted “What the f…” before he died. I started laughing. Maybe my armor protected me and he missed. But he had the drop on me. Above: Subspace is solving internet traffic problems for games. Another company is called Network Next. Its technology measures the lag in a particular player’s game every 10 seconds. If it finds the delay is too long, it looks for alternate paths to speed the packets over the internet to the right destination. The company bids a price to move a user’s data at a certain speed, and the winning bidder gets the traffic. The bidders who can supply better traffic don’t know which application the user is running. The bidders only need to know which path the packets need to take and the bidders determine if they can do it at the right price. If Network Next succeeds in moving the game traffic faster than the public internet can, then the game developer pays Network Next a fee. This enables the developer to please gamers who may otherwise be unable to play because they have too much lag. Still another company called Subspace is building out a ghost internet, or a network of private servers that can be used by multiplayer gamers to bypass the bottlenecks on the internet. The company raised $26 million in April for this purpose. You can think of Subspace as a new kind of CDN for games. It deals with problems about why the internet, which was originally designed for redundancy in the case of a nuclear war, is screwed up. Internet packets have to hop from one kind of infrastructure, owned by one company, to another, owned by another company. Those handoffs take time, and routing isn’t as efficient as it is supposed to be. Software alone can’t solve the problem. 
Part of the solution is lighting up dark fiber, or unused fiber-optic networks, and Subspace has spent part of its money doing that in hundreds of cities around the world. Hundreds of billions of dollars have been invested in submarine cable systems and terrestrial fiber networks. But companies by and large don’t have control over the global network infrastructure. Subspace builds a map of the internet, finds the paths that are fast and routes the traffic. It’s like taking the playbook of high-frequency traders and rebuilding it for games. Is it my cable modem? Above: CableLabs is attacking lag for gamers. CableLabs recently announced that its latest software for cable modems, DOCSIS 3.1, has low-latency DOCSIS technology. This is a new approach to latency that targets a reduction in the round-trip response time to sub 5 millisecond ranges for applications. This means that webpages will load faster, video calls will be smoother, and yes, online gaming will be more responsive, based on my interview with CableLabs’ Schmitt and Ferris. Their research found that latency is what happens when data that should be moving fast gets stuck behind data that is moving slow. As with the Netgear router, the DOCSIS software will separate the types of data you have, whether it’s video streams, online games, or other traffic. The software separates these types of data into different queues, and it prioritizes the ones that need real-time interaction, such as games. A simple software update will implement the new tech, but cable companies have to adopt it first. CableLabs saw technologies such as GeForce Now coming, and that’s why it stepped in with a solution, Ferris said. Cable operators need to deploy equipment that supports the Low Latency, Low Loss, Scalable Throughput (L4S) technology to ensure the combination of high data rates and consistent low latency. “We’ve been working on ways to reduce latency on DOCSIS systems for years,” Schmitt said. “We hit that point where if you really want to get additional fundamental improvements in latency, you have to look at the specific applications you’re trying to reduce latency for. Online gaming was obvious. When all of this traffic is coming into your home for a game and for video streaming, it is sharing a single queue, a single stream of data. When you have multiple kinds of traffic, that’s when you have a problem. Add to that online education or Zoom, then you have more unexpected delays.” Above: Call of Duty: Warzone is one of Activision Blizzard’s big games. By separating the traffic into different queues, cable providers can reduce the latency. If game developers actually mark the packets as low or high priority, it becomes even easier for the cable companies to separate the traffic. That’s a lot of game developers who have to act on this issue, but fortunately, it isn’t that hard to do, Schmitt said. CableLabs also found that they can use Wi-Fi multimedia, a specification that goes back 14 years. It hasn’t been used, but it creates tiers on a Wi-Fi network, and it can put packets into different tiers based on packet marking. This reduces the latency over the Wi-Fi network. “We’ve been prepping the market on this, as it is part of an ecosystem,” Ferris said. “This solves a lot of the problem for online gaming.” Schmitt said that CableLabs’ member companies, the cable companies, have to adopt all these changes to make improvements to the network for gamers. Network Next is one of the companies that has adopted the packet marking. 
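As a hedged illustration of the packet marking just mentioned: a game client can set a DSCP value on its UDP socket so that queues which honor DSCP (a router or DOCSIS modem configured for it) can prioritize that traffic. The EF value, the placeholder server address, and whether anything on the path actually respects the marking are assumptions here, not guarantees.

```python
# Sketch of DSCP packet marking on a game client's UDP socket.
# Addresses and the choice of EF (46) are illustrative assumptions.

import socket

EF_DSCP = 46               # "Expedited Forwarding" traffic class
TOS = EF_DSCP << 2         # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)

# Datagrams sent from this socket now carry the EF marking on most OSes
# (Windows may require extra QoS policy for the mark to be applied).
sock.sendto(b"player-input", ("203.0.113.10", 27015))  # placeholder game server
```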
For normal online games, this may solve a “substantial” part of the latency problem, Schmitt said. “The good news is all of the different solutions wind up being complementary,” Ferris said. “Even Zoom has adopted packet marking. We think the latency-sensitive changes are going to create a significant improvement in the quality of experience for folks.” Of course, cloud gaming, along with live events and Twitch streaming, is stressing the cable network even more as it gains wider adoption. Cloud gaming accounted for only around 5% of gaming traffic during the pandemic, reflecting how early it still is in its adoption. L4S could help here too, but managing game streaming and finding its limits without causing packet loss is a lot more challenging, Schmitt said. CableLabs has talked with the game-streaming companies about this, as the solution is more complicated than the latency fix for normal multiplayer games. All I can say is that when some of these changes go into effect sometime next year, my victories in Warzone will come. "
14,999
2,022
"Equinix and Dell expand partnership for hyperconverged data center offerings | VentureBeat"
"https://venturebeat.com/2022/04/07/equinix-expands-its-hyperconverged-cloud-solutions-via-dell-partnership"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Equinix and Dell expand partnership for hyperconverged data center offerings Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Equinix has announced a major expansion to its Equinix Metal line of bare metal appliances with the release of several new offerings in partnership with Dell Technologies: Dell PowerStore on Equinix Metal : single-tenant Dell PowerStore all-flash arrays as a fully operated service. Dell VxRail on Equinix Metal: a data center-as-a-service experience that includes compute, storage, networking and integrated VMware. Dell EMC PowerProtect DDVE on Equinix Metal : a virtual appliance that runs on a choice of hardware or in the public cloud. Equinix adds data security and management capabilities as well as interconnection to private networks or any cloud platform. “Equinix Metal services provide the virtual data center facilities and capital assets’ scaffolding that’s required to fully realize the promise of hybrid cloud benefits on a global scale,” said Jeff Vogel, an analyst at Gartner. “Equinix colocation provides the hybrid cloud mooring that enables IT clients to transform their data centers and IT operating models into a cloud-based, as-a-services platform.” Competitive hyperconverged infrastructure market The hyperconverged infrastructure space has become highly competitive. It’s evolving into various as-a-service offerings such as data center as a service (DCaaS) and storage as a service (STaaS). Gartner projects heavy growth in this sector. The analyst predicts that by 2025, more than 40% of all on-premises IT storage administration and support costs will be replaced by managed STaaS, up from less than 5% in 2020; and by 2025, more than 70% of corporate enterprise-grade storage capacity will be deployed as consumption-based service offerings, up from less than 40% in 2020. “Many infrastructure and operations leaders are embracing cloud-based storage and its benefits as a replacement for owned, on-premises storage infrastructure,” said Vogel. “This trend is driven by the massive growth in enterprise data, the flexibility of cloud-based delivery models and the rise of remote workplace initiatives.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Underpinning all of this is a massive swing from on-premises to the cloud. At the end of 2020, 36% of global corporate enterprise-grade storage petabytes (PBs) were consumed in the cloud. 
Gartner forecasts this number to reach 59% by 2025, with enterprise storage being increasingly managed as part of a hybrid IT multi-cloud initiative. Further, managed consumption-based storage systems and hybrid IT are expected to host more than 70% of corporate enterprise storage workloads within three years. Vogel emphasizes the rise of hybrid IT where traditional services, public cloud services and private cloud services are combined to create a unified IT environment. In tandem, the traditional model of on-premises, external controller-based storage is gradually shrinking. On-premises storage arrays and appliances will remain for some mission-critical applications. But they are losing out steadily to cloud-based options that can more flexibly disaggregate pools of storage and compute, simplify maintenance, and offer automatic upgrades, good performance and quality of service, as well as built-in data protection (backup, DR and anti-ransomware). With growth rates like those cited by Gartner, Equinix Metal faces heavy competition. Existing STaaS and DCaaS players include Dell Technologies Project Apex, NetApp Keystone, Hitachi Vantara EverFlex, Hewlett Packard Enterprise (HPE) GreenLake, Pure Storage Pure-as-a-Service and Zadara Cloud Services. “The market is highly fragmented and is constantly changing, with both established and emerging vendors delivering differentiated value propositions and product capabilities,” said Vogel. The Equinix strategy Instead of taking these vendors on directly, the Equinix strategy is to partner with them and use its vast data center and bare metal backbone as a foundation for their offerings. “Weʼll be your trusted neutral as-a-service operator,” said Zac Smith, managing director of Equinix Metal. “We are happy to be valued for the as-a-service plumbing that we bring to the table: operational excellence, sustainability innovation, a suite of connectivity options, global reach, interconnected ecosystems and physical infrastructure automation.” These newer offerings build on Equinix’ traditional strength in the colocation market where it competes with companies like Digital Realty, Rackspace Technology, CyrusOne, Global Switch, QTS Realty Trust, Interxion and CenturyLink. With a $68 billion market capitalization, the company spans 220 data centers and 65 metropolitan areas on five continents. It has almost 10,000 staff and as many enterprise customers. Founded in Silicon Valley in 1998 as a vendor-neutral multi-tenant data center provider where competing networks could securely connect and share data traffic, its name is a combination of EQUality, Neutrality and Internet eXchange. It runs its operations on Platform Equinix, which has the goal of facilitating digital transformation. Beyond basic colocation services, though, Equinix has been steadily rolling out digital services like Metal, Network Edge and Fabric. These are designed to help customers move faster and deploy in more places. “While this overlaps slightly with low-level infrastructure offerings from the public clouds, we see these as-a-service digital infrastructure offerings as filling a gap in the market between colocation and cloud,” said Smith. In particular, Equinix differentiates itself via features such as automation and operational experience of cloud services, as well as the choice, control and performance of colocation. This is based on a vision of enabling customers to activate their choice of infrastructure, on demand, in minutes from a global ecosystem of providers. 
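To make "on demand, in minutes" concrete, the sketch below shows what provisioning a bare-metal device through a provider API can look like from Python. The endpoint, header, metro and plan values are assumptions for illustration only, not a confirmed Equinix Metal contract; the provider's current API reference is the authority on the real fields.

# Rough sketch of on-demand bare-metal provisioning through a provider API.
# Every URL, header and body field here is an assumption for illustration; check the
# provider's API documentation before relying on any of them.
import os
import requests

API_BASE = "https://api.equinix.com/metal/v1"    # assumed base URL
PROJECT_ID = os.environ["METAL_PROJECT_ID"]      # hypothetical project identifier
TOKEN = os.environ["METAL_API_TOKEN"]            # hypothetical API token

payload = {
    "hostname": "powerstore-gateway-01",
    "metro": "da",                  # example metro code; availability varies
    "plan": "m3.small.x86",         # example plan name, not guaranteed to exist
    "operating_system": "ubuntu_22_04",
}

resp = requests.post(
    f"{API_BASE}/projects/{PROJECT_ID}/devices",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("provisioning request accepted:", resp.json().get("id"))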
Its collaboration with Dell is a good example of this approach. It pairs the operated, automated, “make it just happen” experience of cloud with the enterprise hardware and technology experience customers want from Dell. “An important value of what Equinix Metal is offering is hybrid cloud infrastructure-as-a-service on a global scale,” said Vogel. “Equinix Metal partners with storage, compute and networking vendors to make this a reality. The real value behind this offering is that Equinix, in partnership with its vendors, is capitalizing infrastructure assets ahead of demand – very much cloud-like, solving what I refer to as the last mile issue for IT clients. That requires a bold, cloud-services vision that brings the vendor community and value-add services along with it.” Rather than directly compete, Equinix focuses on digital infrastructure plumbing that can be paired with specialized partner technologies. By delivering and managing the lowest layers of the stack as digital infrastructure building blocks and providing cloud onramps in a wide range of locations, the company is positioning itself as a neutral place where enterprises can procure, deliver, install, automate, operate and network their gear, whether it is generic compute or specialized Dell, Pure Storage, Nvidia or other solutions. Its global footprint enables Equinix to deliver low-latency access to various ecosystems. Equinix Metal and Dell This Equinix technology backbone offers a default Layer 3 network topology in combination with an API that gives a cloud-like experience to those taking advantage of its services, without the need for users to deal with any of the underlying IT plumbing, integration and interconnection steps. They gain access to solutions from top-tier technology providers, but without the supply chain and operational headaches of doing it all themselves. With single-tenant infrastructure, users control the underlying technology, choose their preferred equipment, and gain independence from the data privacy, security and compliance challenges that affect multi-tenant approaches. “We sit at the intersection of physical and digital – helping customers put infrastructure from the right partners in the right places at the right time,” said Smith. “We’re advancing this vision with Dell Technologies by expanding our managed appliance portfolio to include Dell’s most popular storage, hyperconverged infrastructure, and data protection and management solutions.” Dell PowerStore on Equinix Metal Equinix Metal provides single-tenant Dell PowerStore all-flash arrays as a fully operated STaaS service. These Dell all-flash arrays are designed for any application or database and can be used for block or file storage. Equinix takes care of procuring equipment (bypassing supply chain constraints), installing and maintaining hardware, and managing the colocation, power, top-of-rack and networking. A company can then order these services in a specific location to minimize latency. The tendency toward overprovisioning the data center is avoided, as services can scale up or down as needed. But there is a minimum amount of capacity: Dell PowerStore on Equinix Metal requires a commitment of at least 25 TB of storage. It scales up to more than 2 PB per deployment. Dell VxRail on Equinix Metal DCaaS is available via Dell VxRail on Equinix Metal.
It combines Equinix colocation, networking, operations and automation with Dell VxRail for reliability and performance. This, in effect, covers the entire DCaaS spectrum of compute, storage, networking and integrated VMware. This service comes in four flavors. The first is a general-purpose configuration. The other three are workload-specific configurations, optimized for compute (for high-performance workloads), memory (for in-memory database workloads) or storage (for workloads needing fast scaling of storage capacity). Dell PowerProtect DDVE on Equinix Metal PowerProtect Data Domain Virtual Edition (DDVE) is Dell’s virtual appliance for managing enterprise data across any cloud (AWS, Azure, Google Cloud, AWS GovCloud, Azure Government Cloud, etc.) or on-premises environment. It includes deduplication, replication and protection, and works with existing backup, archiving and enterprise applications. Dell PowerProtect DDVE on Equinix Metal integrates these Dell data security and management services with Platform Equinix, where they can be connected to private environments, as well as any cloud platform, through streamlined onramps. The point of this pairing is to unlock the ability to use cloud services in combination with mission-critical enterprise data, while keeping full control. “Solutions like Equinix Metal are a great fit for organizations of all sizes that have cloud-first mandates, yet need a hybrid approach,” said Greg Schulz, an analyst for StorageIO Group. “They are also a great option for data protection including backup, restore, BC, DR, as well as a failure standby site.” "
15,000
2,022
"Why USD could be the HTML for the metaverse, digital twins and more | VentureBeat"
"https://venturebeat.com/2022/03/24/why-usd-could-be-the-html-for-the-metaverse-digital-twins-and-more"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why USD could be the HTML for the metaverse, digital twins and more Share on Facebook Share on X Share on LinkedIn Universal scene description Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Follow VentureBeat’s ongoing coverage of Nvidia’s GTC 2022. >> The universal scene description (USD) language and format are rapidly being adopted as a Rosetta stone to translate data among 3D tools, game engines, digital twins and ecommerce offerings. It still has a way to go, but, it’s the best hope the industry has to unify workflows and user experiences across the metaverse. At the Nvidia GTC Conference, experts weighed in on USD’s history, current use cases, drawbacks and whether it could become the HTML of the 3D Web. When HTML was first introduced in 1993 it wasn’t a great thing. But it was the first serious effort to unify text, graphics and hyperlinks into a coherent interface, programming language and platform. It trumped the default approaches of the day like gopher and proprietary bulletin board services with funky fonts and poor layouts. And it was extensible across servers everywhere. USD is in the same position today. It isnt great at everything, but it does the best job among dozens of alternatives for sharing 3D assets across tools, services and platforms in an open and extensible way. USD is already helping companies like Ubisoft and Ikea simplify some 3D workflows and seeing traction with rendering engines like Unity and Unreal. That said, it still has a few limitations in rendering, cutting, and pasting models between worlds, materials, physics, and more sophisticated workflows. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Born at Pixar It is helpful to go back to the birth of USD to understand why it emerged and how early design decisions shaped its current state. Steve May, vice-president and CTO of Pixar, saw the predecessors to USD when the company was starting to make Toy Story. They were faced with problems describing scenes, lighting, cameras and other assets required to simplify workflows across large teams of artists and technicians. In the late 1990s, they experimented with concepts like referencing, layers and hierarchies, which form the basis for scaling production. In the mid-2000s, Pixar started incorporating these techniques into a new animation tool called Presto. Brave was the first film to use Presto. 
They discovered that the description language in Presto was expressive, but the performance was not there for the artist. “Brave was pretty complex, and even loading scenes into Presto was challenging,” May explained. Pixar created another form of scene description optimized for performance, so they decided to combine these. This led to the genesis of USD as it has become today. “At its core, it’s about how you describe complex scenes and worlds and how you let many people collaborate,” May said. USD describes the essential pieces of an environment, set, world or prop. It helps designers characterize each prop, shape, material, lights, and camera view used to describe the scene. When building large complex worlds, it’s difficult to break that problem into pieces that can be represented in layers. This allows multiple artists to work on the same scene without messing up anything. With layers, an artist can be posing a character while a set dresser might be moving around objects in the scene to make a better composition for the background set simultaneously. Then at the end, they can composite those layers to see the result. Guido Quaroni, senior director of engineering of 3D & immersive at Adobe, who also worked at Pixar in those early days, said they decided to open-source USD for strategic reasons. They wanted to build traction among the various tools Pixar used to reduce the risk of having to reengineer the Pixar pipeline to support new open-source alternatives like Alembic in the future. The fact that it took off also meant they did not need to invest internal resources to write integration plugins for DCC tools like Maya, Katana, and Houdini. Getting into details at Ubisoft USD is starting to gain traction for enterprises in other industries. For example, Ubisoft is beginning to use USD to simplify 3D asset exchange and integration across tools. Adeline Aubame, director of technology operations, Ubisoft, said, “We were interested in representing 3D data in a vast ecosystem while remaining performant.” At this first stage, USD simplifies import/export between DCC tools and rendering engines. This allows Ubisoft to hire talented artists who may not be experts in Ubisoft’s existing tools. However, her team is already struggling with places where USD lacks certain features they need for making video games. For example, level-of-detail is a popular rendering technique that selectively improves resolution for some parts of the scene. This focuses computing horsepower where it matters for gameplay. It is also essential to find ways that LOD can improve gradually rather than quickly pop into focus. Aubame hopes the game development community can find ways to add LOD support to USD workflows. “The great thing of an open format is that the power of many could help solve these problems,” she said Ikeas rendering challenge Martin Enthed, innovation manager at Ikea, has spearheaded efforts to perform offline 3D to replace photography for various marketing collateral. He fell in love with USD when he first saw it in 2016. Features like external references, overrides, and layering are all perfect for Ikea’s production pipeline. “But the problem we have is that it has not been adopted by the tools we are using,” Enthed said. They have, of course, been experimenting with it and testing how to store things in a 3D database and then push them back out. His interest piqued last year as USD picked up steam across content tool vendors. 
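For readers who want to see the external references, overrides and layering that Enthed singles out, here is a minimal sketch using the open-source USD Python bindings (pxr). The file names and prim paths are invented for illustration; real pipelines structure and name these very differently.

# Minimal sketch of USD referencing and layering with the open-source pxr bindings.
# File names and prim paths are invented for illustration.
from pxr import Usd, UsdGeom, Sdf

# One artist publishes a prop as its own asset file.
asset_stage = Usd.Stage.CreateNew("chair_asset.usda")
UsdGeom.Xform.Define(asset_stage, "/Chair")
UsdGeom.Cube.Define(asset_stage, "/Chair/Seat")      # stand-in geometry
asset_stage.SetDefaultPrim(asset_stage.GetPrimAtPath("/Chair"))
asset_stage.Save()

# The set references that asset instead of copying it.
set_stage = Usd.Stage.CreateNew("set.usda")
UsdGeom.Xform.Define(set_stage, "/Set")
chair = set_stage.DefinePrim("/Set/HeroChair")
chair.GetReferences().AddReference("chair_asset.usda")

# A second artist's edits live in a separate layer composed over the set, so both can
# work at the same time and the results combine when the stage is opened.
Sdf.Layer.CreateNew("set_dressing_overrides.usda").Save()
set_stage.GetRootLayer().subLayerPaths.append("set_dressing_overrides.usda")
set_stage.Save()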
For now, Ikea is still focusing on offline rendering use cases, where it generates content for a catalog or image carousel on a web page. Enthed would like to explore more real-time and metaverse use cases. But the main problem is that USD does not travel across rendering engines well. “Our main problem right now is the interoperability between renderers,” he said. One big challenge is variation in the material and surface description tools across vendors. When Ikea generates 3D content for something like a bookshelf or coffee table, it needs to ensure the surface looks realistic and consistent whether someone is viewing it on a PlayStation, PC, mobile device or Quest headset, across different rendering engines. Enthed is skittish about investing too many resources in digitizing Ikea's entire catalog, since most Ikea products have a long lifetime. Some current products date back to 1978. “I need to make a 3D asset today that can be hopefully loaded in 10 years,” Enthed said. “If we talk about HTML for 3D worlds, it needs to be possible to load in any 3D browser engine and look the same.” Planting seeds for the creatorverse Unity Editor was one of the first DCC tools to support USD in 2017. This helped the Lion King virtual production exchange content between Maya and Unity to generate 3D animation clips. Lately, Unity has been focusing on improving USD rendering performance, said Natalya Tatarchuk, distinguished technical fellow and chief architect, professional artistry & graphics innovation at Unity. One big challenge is improving rendering support across devices as varied as PlayStations, PCs and mobile devices. Her team is also exploring different ways to standardize surface material formats so they look good across devices. Tatarchuk said, “We need to enable people to author once but have that data durably flow across the diverse divergent render backends in a scalable way.” Her team is working with partners like Pixar, Nvidia and Adobe to address these challenges. They are also expanding USD support to other tools like SpeedTree for organic rendering and Ziva for faces. Workflow dead ends, in which USD works for some phases of the 3D development lifecycle but must be manually patched for others, are a big frustration. These gaps are common across all the main DCCs, such as Maya, Blender and Houdini, and experience engines such as Unity and Unreal. All the major vendors will have to work together on USD to bridge these gaps. In some cases, this may require a leap of faith. “Without that leap of faith, we will always be waiting,” she said. Down the road, she hopes this could help position the industry for what she calls the 3D “creatorverse.” This would mirror the kind of 2D sharing and the explosion of users built on apps like YouTube, Snapchat, Instagram and TikTok. These tools make it easy to grab an image or video, transform it, and then share it with friends. “This is impossible to do in the creatorverse of real-time 3D,” she said. Another limitation is the lack of standards for describing interactivity. For example, how does a content creator describe how to interact with content and how it flows across time? The industry also needs to develop standards to describe procedurals characterizing elements like rigging and standardized animation curves. “This is super important, but no one is willing to compromise,” she said. “Everybody needs to come together and find some middle ground for some of these complicated aspects. We can find choices that may be imperfect in some contexts.
Some ground is given, and some ground is gained because we need to be able to solve the missing pieces.” It is not possible to cut and paste a 3D character across applications today. The industry will need to agree on standards for procedurals, engine components, virtual cameras, lights, and controlling gameplay and behavior. Marc Petit, general manager for Unreal Engine at Epic Games, sees hope for new standards like glTF, which is a good solution for authoring and transporting 3D content. He also believes it’s essential to be able to drag 3D objects across worlds, such as allowing players to drive cars from Minecraft to Fortnight. In his role at Adobe, Quaroni wants to make it easier to share 3D content across various Adobe tools and services. “Ideally, they should be able to copy and paste between each other, but this is not easy because architecturally, there are different ways they represent the data,” he said. As a first step, he is working on improving lossless interoperability. Down the road, Quaroni is exploring how USD might be used to change the way people think about managing documents and files. He is also exploring how this approach could improve interoperability across 3D tools. Quaroni explained, “We are in the creatorverse space, creating assets for the metaverse. We need to assume it is not just Adobe’s tools. Let’s start thinking that way rather than trying to make everyone use our tools only. Ultimately the circle will come back to support our tools in this model.” Moving into digital twins Nvidia catalyzed recent interest in USD as it began to standardize support across its entire toolchain. Frank DeLise, vice president at Nvidia, said USD started as an internal problem to improve collaboration between humans, tools and AI to enable new digital twins use cases like simulating roads and factories. “We realized we needed an open standard to build these worlds on,” he said. None of the tools for new use cases like autonomous vehicles, robots and large virtual worlds could talk to each other. It was essential to allow anyone to contribute to these worlds and move assets across different views, whether rendered in Unreal, Unity, or web views. The USD format is picking up steam for exchanging data, but the USD runtime side of the equation is still missing. Nvidia is working with other leaders to figure out how to build a runtime version of USD to improve performance. “In the near term, many of these engines and viewers will have their own representations,” DeLise said. With the right architecture and industry collaboration, USD could spark the same growth in the metaverse that HTML initiated for the web. For example, Nvidia is gradually exploring ways to add functionality through microservices and connections to other tools. Nvidia has started open-sourcing various USD schemas such as PhysX for physics rendering and material definition language (MDL) for describing surface rendering. This also helps rethink the way to develop connections to other tools. For example, Nvidia can bring in AI tools through USD microservice connections. DeLise believes the 3D graphics industry is still in the early journey of connecting all the tools that support features that have become common on the web. “I think USD will be a great way to start describing that world, but there will be a lot of work to figure out to do those behaviors, microservices, and connections to get to that level,” DeLise said. 
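One concrete piece of the material-consistency problem Enthed and Tatarchuk describe above is having a shared baseline description at all. USD's UsdPreviewSurface is one such baseline, and the sketch below defines a simple look with it using invented values, so that different renderers at least start from the same parameters. It does not solve the harder problem of renderers interpreting those parameters differently.

# Minimal sketch of a baseline look described once with UsdPreviewSurface.
# Geometry, paths and values are invented for illustration.
from pxr import Usd, UsdGeom, UsdShade, Sdf, Gf

stage = Usd.Stage.CreateNew("bookshelf_look.usda")
shelf = UsdGeom.Cube.Define(stage, "/Bookshelf")     # stand-in for real geometry

material = UsdShade.Material.Define(stage, "/Looks/OakVeneer")
shader = UsdShade.Shader.Define(stage, "/Looks/OakVeneer/Preview")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set(Gf.Vec3f(0.55, 0.38, 0.22))
shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.6)
shader.CreateInput("metallic", Sdf.ValueTypeNames.Float).Set(0.0)
material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Bind the look to the geometry; any renderer that understands UsdPreviewSurface starts
# from this same description, even if the final pixels still differ.
UsdShade.MaterialBindingAPI.Apply(shelf.GetPrim()).Bind(material)
stage.Save()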
"
15,001
2,022
"Why glTF is the JPEG for the metaverse and digital twins | VentureBeat"
"https://venturebeat.com/2022/05/11/why-gltf-is-the-jpeg-for-the-metaverse-and-digital-twins"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why glTF is the JPEG for the metaverse and digital twins Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The JPEG file format played a crucial role in transitioning the web from a world of text to a visual experience through an open, efficient container for sharing images. Now, the graphics language transmission format (glTF) promises to do the same thing for 3D objects in the metaverse and digital twins. JPEG took advantage of various compression tricks to dramatically shrink images compared to other formats like GIF. The latest version of glTF similarly takes advantage of techniques for compressing both geometry of 3D objects and their textures. The glTF is already playing a pivotal role in ecommerce, as evidenced by Adobe’s push into the metaverse. VentureBeat talked to Neil Trevett, president of the Khronos Foundation that is stewarding the glTF standard, to find out more about what glTF means for enterprises. He is also the vice president of developer ecosystems at Nvidia, where his job is to make it easier for developers to use GPUs. He explains how glTF complements other digital twin and metaverse formats like universal scene description ( USD ), how to use it and where it’s headed. VentureBeat: What is glTF and how does it fit into the ecosystem of the metaverse and digital twins related sort of file formats? Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Neil Trevett : At Khronos, we put a lot of effort into 3D APIs like OpenGL, WebGL and Vulkan. We found that every application that uses 3D needs to import assets at some point or another. The glTF file format is widely adopted and very complementary to USD, which is becoming the standard for creation and authoring on platforms like Omniverse. USD is the place to be if you want to put multiple tools together in sophisticated pipelines and create very high-end content, including movies. That is why Nvidia is investing heavily in USD for the Omniverse ecosystem. On the other hand, glTF focuses on being efficient and easy to use as a delivery format. It is a lightweight, streamlined and easy to process format that any platform or device can use, down to and including web browsers on mobile phones. The tagline we use as an analogy is that “glTF is the JPEG of 3D.” It also complements the file formats used in authoring tools. For example, Adobe Photoshop uses PSD files for editing images. 
No professional photographer would edit JPEGs because a lot of the information has been lost. PSD files are more sophisticated than JPEGs and support multiple layers. However, you would not send a PSD file to my mom’s cellphone. You need JPEG to get it out to a billion devices as efficiently and quickly as possible. So, USD and glTF similarly complement each other. VentureBeat: How do you go from one to another? Trevett: It’s essential to have a seamless distillation process, from USD assets to glTF assets. Nvidia is investing in a glTF connector for Omniverse so we can seamlessly import and export glTF assets into and out of Omniverse. At the glTF working group at Khronos, we are happy that USD fulfills the industry’s needs for an authoring format because that is a huge amount of work. The goal is for glTF to be the perfect distillation target for USD to support pervasive deployment. An authoring format and a delivery format have quite different design imperatives. The design of USD is all about flexibility. This helps compose things to make a movie or a VR environment. If you want to bring in another asset and blend it with the existing scene, you must retain all the design information. And you want everything at ground truth levels of resolution and quality. The design of a transmission format is different. For example, with glTF, the vertex information is not very flexible for reauthoring. But it’s transmitted in precisely the form that the GPU needs to run that geometry as efficiently as possible through a 3D API like WebGL or Vulkan. So, glTF puts a lot of design effort into compression to reduce download times. For example, Google has contributed their Draco 3D mesh compression technology and Binomial has contributed their Basis universal texture compression technology. We are also beginning to put a lot of effort into level of detail (LOD) management, so you can very efficiently download models. Distillation helps go from one file format to the other. A large part of it is stripping out the design and authoring information you no longer need. But you don’t want to reduce the visual quality unless you really have to. With glTF, you can retain the visual fidelity, but you also have the choice to compress things down when you are aiming at low-bandwidth deployment. VentureBeat: How much smaller can you make it without losing too much fidelity? Trevett: It’s like JPEG, where you have a dial for increasing compression with an acceptable loss of image quality, only glTF has the same thing for both geometry and texture compression. If it’s a geometry-intensive CAD model, the geometry will be the bulk of the data. But if it is more of a consumer-oriented model, the texture data can be much larger than the geometry. With Draco, shrinking data by five to 10 times is reasonable without any significant drop in quality. There is something similar for texture too. Another factor is the amount of memory it takes, which is a precious resource in mobile phones. Before we implemented Binomial compression in glTF, people were sending JPEGs, which is great because they are relatively small. But the process of unpacking this into a full-sized texture can take hundreds of megabytes for even a simple model, which can hurt the power and performance of a mobile phone. The glTF textures allow you to take a JPEG-sized super compressed texture and immediately unpack it into a GPU native texture, so it never grows to full size. As a result, you reduce both data transmission and memory required by 5-10 times. 
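Trevett's memory argument is easy to check with back-of-envelope arithmetic. The bytes-per-texel rates below are the standard ones for these GPU formats; the 4096x4096 texture size and the mip-chain overhead are just an example, and the actual savings depend on which format the texture is transcoded into.

# Back-of-envelope comparison of resident texture memory for one 4096x4096 texture.
# Bytes-per-texel rates are the standard ones for these formats; sizes are an example.
SIDE = 4096
TEXELS = SIDE * SIDE
MIP_FACTOR = 4 / 3    # a full mip chain adds roughly one third more texels

formats = {
    "RGBA8 (JPEG decoded to full size)": 4.0,   # bytes per texel
    "BC7 / ASTC 4x4 (GPU block format)": 1.0,
    "BC1 (opaque, lowest quality)": 0.5,
}

for name, bytes_per_texel in formats.items():
    mib = TEXELS * bytes_per_texel * MIP_FACTOR / (1024 ** 2)
    print(f"{name:36s} {mib:7.1f} MiB resident")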
That can help if you’re downloading assets into a browser on a cell phone. VentureBeat: How do people efficiently represent the textures of 3D objects? Trevett: Well, there are two basic classes of texture. One of the most common is just image-based textures, such as mapping a logo image onto a t-shirt. The other is procedural texture, where you generate a pattern, like marble, wood, or stone, just by running an algorithm. There are several algorithms you can use. For example, Allegorithmic, which Adobe recently acquired, pioneered an interesting technique to generate textures now used in Adobe Substance Designer. You often make this texture into an image because it’s easier to process on client devices. Once you have a texture, you can do more to it than just slapping it on the model like a piece of wrapping paper. You can use those texture images to drive a more sophisticated material appearance. For example, physically based rendered (PBR) materials are where you try and take it as far as you can emulate the characteristics of real-world materials. Is it metallic, which makes it look shiny? Is it translucent? Does it refract light? Some of the more sophisticated PBR algorithms can use up to 5 or 6 different texture maps feeding in parameters characterizing how shiny or translucent it is. VentureBeat: How has glTF progressed on the scene graph side to represent the relationships within objects, such as how car wheels might spin or connect multiple things? Trevett: This is an area where USD is a long way ahead of glTF. Most glTF use cases have been satisfied by a single asset in a single asset file up till now. 3D commerce is a leading use case where you want to bring up a chair and drop it into your living room like Ikea. That is a single glTF asset and many of the use cases have been satisfied with that. As we move towards the metaverse and VR and AR, people want to create scenes with multiple assets for deployment. An active area being discussed in the working group is how we best implement multi glTF scenes and assets and how we link them. It will not be as sophisticated as USD since the focus is on transmission and delivery rather than authoring. But glTF will have something to enable multi-asset composition and linking in the next 12 to 18 months. VentureBeat: How will glTF evolve to support more metaverse and digital twins use cases? Trevett: We need to start bringing in things beyond just the physical appearance. We have geometry, textures and animations today in glTF 2.0. The current glTF does not say anything about physical properties, sounds, or interactions. I think a lot of the next generation of extensions for glTF will put in those kinds of behavior and properties. The industry is kind of deciding right now that it’s going to be USD and glTF going forward. Although there are older formats like OBJ, they are beginning to show their age. There are popular formats like FBX that are proprietary. USD is an open-source project and glTF is an open standard. People can participate in both ecosystems and help evolve them to meet their customer and market needs. I think both formats are going to kind of evolve side by side. Now the goal is to keep them aligned and keep this efficient distillation process between the two. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,002
2,022
"Despite metaverse buzz, 60% of consumers have zero interest in virtual shopping | VentureBeat"
"https://venturebeat.com/2022/05/30/despite-metaverse-buzz-60-of-consumers-have-zero-interest-in-virtual-shopping"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Despite metaverse buzz, 60% of consumers have zero interest in virtual shopping Share on Facebook Share on X Share on LinkedIn While buzz has surrounded the supposed infinite potential of the metaverse and AR/VR technology as tools for future online marketplaces, there has also been a decline in revenue for several ecommerce companies in recent years, leading some organizations to go back to the drawing board when it comes to good digital CX. A new report from Productsup has surveyed consumers’ tastes and expectations when it comes to digital hybrid shopping experiences, with a particular focus on sustainability and the metaverse. For many companies looking to boost sales in the digital marketplace, the results illustrate an uphill battle: according to the report, 60% of shoppers have zero interest in buying virtual goods whatsoever. With revenue from the metaverse expected to reach $800 billion in 2024, it’s no wonder that forward-thinking organizations might be eager to cater to customers who aren’t quite yet interested in online-only spending. Overall, the results from Productsup’s report indicate that customers are chiefly keen on digital CX that offers transparency, accessibility and availability. In the past decade, sustainability and DEI initiatives have risen to the forefront of consumers’ minds; as they decide on whether to purchase a company’s product, they’re more and more likely to inquire about the why and how a said product is made. Consumers tend to avoid products that’ll end up in a landfill, and instead prefer ones that are reusable (71%) or recyclable (70%). Despite this, consumers say information on a product’s reusability (34%) and recyclability (30%) can be difficult to find. It’s no longer enough to include a “fair trade” or “biodegradable” label on your paper coffee cups, for example — not only do 43% of consumers want a detailed explanation as to how the product is biodegradable, but 40% also want information that proves that the product aligns with its “sustainable” label. “Consumers aren’t distracted by ‘greenwashing,'” said Lisette Huyskamp, chief marketing officer at Productsup. “[Their] expectations can’t be met unless product information is managed with a strong P2C [product-to-consumer] strategy.” Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! While consumers across all generations want more product information, how best to present said information depends on each generation. 
Gen Z welcomes the advent of the metaverse and digital-only shopping much more readily than their older counterparts. Similarly, Gen Z is much more likely to prefer information that’s presented via online comparisons (40%) or QR codes (37%). On the other end of the generational spectrum, those 55 years or older tend to prefer information that’s easy to find and contained within the product description itself. Finally, customers tend not to want an “either/or” shopping experience; they want product information and deals to be available in both the metaverse and the store. Roughly equal shares of consumers indicated they’re more likely to buy a product if a deal is offered exclusively in a store or exclusively online (55% vs. 54%, respectively), meaning that companies should offer coupons and sales in both physical and digital venues. Technology that blends physical and digital shopping is also welcomed: 47% of consumers would make a purchase if they could access product information via a store’s mobile app while shopping in person, for example. Augmented reality (AR) technology, such as smart mirrors and mobile filters, could also motivate consumers at the store (41%) or on the company’s website (42%). All in all, the results indicate that while many consumers are looking forward to the expected gains in speed, convenience and information offered by the metaverse and other digital marketplaces, they’re not quite yet willing to abandon the tried-and-true methods of decades past. “In today’s commerce world, brands and retailers need to deliver nuanced experiences tailored to consumers wherever they shop,” said Huyskamp. Productsup’s report is based on a survey of nearly 5,700 consumers age 16 and up across the U.S. and Europe, asking about their preferences, expectations and behavior toward hybrid shopping experiences. Read the full report by Productsup. "
15,003
2,022
"How manufacturing companies can use digital twins to remain competitive | VentureBeat"
"https://venturebeat.com/2022/05/31/how-physical-product-focused-companies-can-use-digital-twins-to-remain-competitive"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How manufacturing companies can use digital twins to remain competitive Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Companies that make physical products sometimes struggle to stay relevant as digital natives and find creative ways to capture the highest margin fringes of age-old businesses. One specific challenge these companies face is that digital native businesses have developed advanced data processing capabilities to create better customer experiences and identify new opportunities. This is much harder for established physical goods industries, which rely on legacy systems and manufacturing equipment. Digital twins could help bridge this gap between legacy systems and modern customer experiences, Michael Carroll, VP at Georgia-Pacific, predicted at the Digital Twin Summit. Carroll leads corporate transformation strategy development at the paper and forest products giant. He argues that physical products industries don’t have suitable mechanisms for dealing with the exponential growth in data. Most business leaders he talks to know that data is growing, but they take a linear rather than exponential perspective. This limits the ability to capture value from new data streams like the IoT, ecommerce services, manufacturing equipment and customer interactions. The permission bottleneck Business leaders also face the challenge of implementing a permission-based approach to help integrate information technology (IT) and operational technology (OT) used for managing physical machines. To do so, business and engineering teams must ask the IT department for access to digital representations of the assets they manage. Then, the IT department needs to ask for permission to get more data from the physical assets. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We end up in a permission asking cycle in a world that is growing exponentially,” Carroll said. He observed that in the mid-1970s, the bulk of the S&P 500 was made of companies whose tangible assets made up 85% of their value. But today, the balance between tangible assets like goods created in factories and intangible assets like brands and experiences is reversed. The leading companies are systems-based rather than functions and process-based companies. They have created connected ecosystems that generate, aggregate and analyze customer, market and supplier information. 
As a result of those connected ecosystems, they understand what their customers want before competitors do. The exponential model Established businesses need to take a similar approach, extending their traditional tools to support digital twins of real-world goods, manufacturing processes and marketplaces. To do this at scale, the IT organization needs to plan a more self-service and democratized approach to provisioning, updating and leveraging digital twins. “This means that in order to create value at the rate that data grows, which is exponential, you might have to reconstruct yourself so that you don’t have to ask permission to go create value,” Carroll said. This allows business executives and operations teams to stand up new devices, create new applications or change configurations on their own. “Now they are responsible for the digital representation of the thing they are in charge of,” he said. This new approach could also allow enterprises to create digital twins powered by artificial intelligence (AI) to understand and respond to customer values and decisions. “We do not know a lot of the answers, except to say that we’re pretty sure that tomorrow is about creating value in the exponential age and creating value at scale, with data growing exponentially,” Carroll said. “Digital twins will be a huge part of that, and it will be powered by AI.” "
15,004
2,021
"Report: 85% of consumers rethink purchases from companies that lack focus on climate and diversity | VentureBeat"
"https://venturebeat.com/2021/10/29/report-85-of-consumers-rethink-purchases-from-companies-that-lack-focus-on-climate-and-diversity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 85% of consumers rethink purchases from companies that lack focus on climate and diversity Share on Facebook Share on X Share on LinkedIn Female farm worker using digital tablet with virtual reality artificial intelligence (AI) for analyzing plant disease in sugarcane agriculture fields. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a new global report by Exasol, 85% of consumers have changed their minds about purchasing from a company because they felt it did not do enough to properly address climate change. Fifty-four percent of corporate social responsibility (CSR) decision-makers share a similar mentality, believing companies that fail to act on “going green” and other sustainability initiatives will no longer exist in ten years. The report indicates that consumers no longer instinctively trust the words of companies from which they have previously purchased goods or services. Instead, businesses need to demonstrate efforts towards key initiatives before consumers reach for their wallets. A substantial majority (68%) of consumers will consider demanding data-backed evidence to prove that companies are making beneficial steps towards addressing global warming, diversity and inclusion (DEI ), as well as ethical and sustainable business practices in the next 36 months. These initiatives are also becoming influential for consumers’ decision-making, as over 86% of respondents have indicated that they will decide whether or not to do business with a company based on its credentials with climate change, DEI initiatives, and ethical and sustainable practices. The latter is also cited by 88% of consumers as a key factor when making purchases. Furthermore, there appears to be a hard deadline as to when businesses can improve their practices and credentials: 66% revealed they would cease buying from a company that didn’t have definitive plans to work on these initiatives within the next three years. Despite consumers’ increasing demand for visible corporate efforts to fight against climate change and a lack of workplace diversity, there appears to be a startling minority of businesses that have either enacted plans or hope to enact plans soon to address these issues. Only 42% of corporations, for example, have a “fully-formed roadmap” in place to ensure climate-friendly business practices are initiated within the next three years, while 31% of those who have no current plans to address these issues still had zero strategies to do so within the next year. 
However, there is growing acknowledgment within corporations that something must be done before customers stray, as well as a growing desire for data to inform corporate decisions regarding these crucial initiatives. Eighty-two percent of CSR decision-makers agree that better choices could be made to improve businesses’ climate change, DEI, and ethical and sustainable practices if corporations were given greater access to data-led insights, even while only 22% of CSR respondents appear to be using all of the data available to them. With these results, it is clear that data is becoming a critical resource for businesses and consumers alike as consumer culture pivots in support of various societal issues. Increased accessibility to data should become a basic requirement for many businesses in the next three years. The consumer survey was conducted among 8,056 employees of companies with over 500 employees that have CSR, ESG, or DEI programs in the U.S., U.K., Germany, China, South Africa, and Australia. The CSR decision-maker survey was conducted among 716 CSR decision-makers in the same regions and in companies of a similar size. Read the full report by Exasol. "
15,005
2,022
"The 3 pillars of the retail metaverse | VentureBeat"
"https://venturebeat.com/2022/02/10/the-3-pillars-of-the-retail-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The 3 pillars of the retail metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Will Hayes, CEO of Lucidworks. Adidas , L’Oreal , and even Martha Stewart have already jumped into the metaverse. Retailers see “metaverse” and flashback to twenty years ago when e-commerce kicked off. They prefer not to be the ones launching their first HTML site while the competition is launching their ecommerce platform. The metaverse introduces a new platform, new currency, new opportunities, and new ways to fail. Here’s how brands can build for the metaverse and catch the wave before it crushes them. Grow from your physical store The metaverse feels intangible right now, in the same way that a website felt tenuous in the ’90s — and we all know how that ended. Ecommerce, including selling on social channels, became the gold standard and an entire industry of technology and consultants grew to support it, paving the way for D2C retailers who abandoned physical channels altogether. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Retailers that have used physical locations to differentiate their experiences have struggled to find a way to bring this to the digital world. One of the most well-known examples of this is Sears. Once a leading department store that anchored malls all over the country, Sears failed to evolve to omnichannel and was crushed by department stores including Nordstrom and Kohls. In the metaverse, these retailers with a great understanding of how to build experiences and foster community in-person could have a leg up. Why? Retailers should be expanding from their physical stores into the metaverse, not leading with digital. Physical spaces work because we’re physical humans. Entire teams of merchandisers and designers are built on the idea that how we experience things as we move through real, physical life matters. As a consumer, it makes me feel something, makes me buy more, makes me come back — and in the metaverse, I can do it all without getting off the couch. That expertise should be what retailers leverage in the digital metaverse universe. Many retailers will fall into the trap of leading with their teams of engineers to create the strategy for their metaverse experience. 
Don't recreate your website in the metaverse; be intentional about using the metaverse to enrich the customer experience. Five years down the line, there will likely be teams of consultants and developers who can help build the metaverse experience. For now, tap into the knowledge of the people on your team who understand how to translate brand values and a sense of community into physical spaces to help build the metaverse. People matter more than ever; don't overlook the expertise and understanding of people who are creating in-person experiences. Be intentional in the retail metaverse In many ways, the metaverse isn't totally net-new; it's a combination of things that have already happened. The first idea of the metaverse dates back twenty years. Many ecommerce platforms already offer some of the AR and VR features that will be part of the retail metaverse. Trying on makeup without it ever touching your skin? Sephora already did that. Entering room dimensions to see how you can organize furniture? There's a long list of apps that help you imagine your space. Watching a store associate or ambassador hawk goods in real time? Livestream shopping boomed during the pandemic. So what's different about the metaverse? Instead of watching things on a screen, you're stepping into an entire world; the metaverse blends all of these possibilities in one place. Intentional creation is key. Intentionality must drive the three key pillars that retailers build into their metaverse: community, experience, and engagement. It's not hard to spit out a one-off VR game (like I said earlier, a lot of that has already been done). Focus resources on building a metaverse that fosters community, enriches customers' experience with the brand, and engages shoppers to build loyalty. 1. Be intentional with community Brands like Peloton and REI have brought a sense of belonging and community to a hybrid world. They connect the "average consumer" with likable and accessible experts in the form of trainers and store employees. Before you spin up another VR game, set the intention for the customer experience you're building. Brands like Lululemon that have been cultivating community through in-store classes and a shared bond over the love of yoga may already have clear intentions around what they want their community to be. Big box stores like pharmacies and grocery chains will have to dig a little deeper. Here's a checklist of questions to help get the conversation going. Retailers should use these questions to help define the intention around the community before they start spinning up their new world: What type of world do I want my shoppers to step into? Think about everything that goes into designing your real (or imagined) physical store; how does it smell? Where do people gather? Are there sales associates? Am I welcoming people into a showroom or onto a factory floor? How is the experience of my customers enriched by the metaverse? If customers adore livestream shopping events, how am I using the metaverse to enrich that experience, as opposed to ticking a box or providing the exact same thing on another platform? Am I creating a new type of forum for enthusiasts or casual browsers to connect? Why build community at all? Is it to retain loyal customers? Increase average order value? Purely to delight shoppers? These questions matter. And if you can't answer them, you could fall victim to creating a hodgepodge experience. 2.
Be intentional with experience Retail in the metaverse is a unique opportunity to access and envision products in a way that wasn’t previously possible. When you put on VR goggles, you not only get to see what your room would look like if you bought that pink velvet couch from CB2, you also get to sit down in the room and experience how it all fits together. The metaverse allows customers to experience physical goods in the virtual world, even recreating virtual representations of their backyard, house, car, or body to get an immersive experience before purchasing art, furniture, or clothing. Retailers must avoid the “laundry list” mentality when they’re building out the metaverse experience. Again, be intentional. One size doesn’t fit all in the metaverse. For example, both Porsche and Kia want to sell cars. But because of the type of customer they have, the intention behind the experience will be different. Porsche enthusiasts who are doggedly following online car auctions and can tell you engine specs for the 911 made in 2006 are going to want to step onto the factory floor and see how Guards Red paint is formulated. Kia shoppers may not be as interested in those details. The list of questions retailers need to ask themselves is much shorter than community building. “How is the metaverse enhancing the customer experience?” If it’s an add-on, or it’s replicating something that already exists on Instagram Live or YouTube, retailers are wasting their resources. The metaverse can enhance that personalized experience by offering a live-stream shopping event where customers can sit next to a brand ambassador, and then immediately be able to step into a virtual dressing room where they can try something on, add it to their cart, and check out. A large component of experience building will be figuring out how to incorporate business channels, which should be dictated by the type of experience that’s being built. Once retailers have a clear intention around the vibe of their universe, it’s time to consider how money flows in. 3. Be intentional with engagement For the brands that have intentionally built a space that enhances the customer experience and puts intention behind the creation of a virtual community (and hopefully, they’ve got a VR room full of shoppers), what now? There are two big trends happening that retailers should capitalize on. Welcome to the world of nonfungible tokens Building community and guiding customers through an intentional experience is great — and arguably one of the biggest draws for consumers — but it doesn’t drive dollars. For consumers that aren’t familiar with cryptocurrency, nonfungible tokens ( NFTs ), or digital wallets, this presents a big barrier to access. Retailers must make it easy for consumers to understand and set up their presence in Web 3.0. Commerce companies should tap commerce platform experts to help design the experience to guide customers from the familiar e-commerce site to the new 3D cryptoverse. Brands have chosen different ways to deploy NFTs. For example, Martha Stewart recently announced Martha Stewart Mint, where you can click on the digital record you’d like to buy and it guides you to a product description and FAQs about how the heck this all works. L’Oreal recently released their first NFT series by leading female, digitally-native artists inspired by a lipstick collection, centered on female empowerment. The path to OpenSea , where the majority of auctions take place, isn’t as clear as Martha’s. 
For a crypto newbie, it could be too confusing to pursue. When incorporating NFTs as a buying channel, consider the initial intention that was established around community and experience. Retailers should consider how they can connect digital and physical channels. Consider incorporating a physical product drop alongside an NFT purchase. Adidas did this in their drop in December, using Twitter and Discord to promote the upcoming sale. By offering NFTs and a physical product, they can span both channels seamlessly. Personalized experiences bridged from ecommerce to the retail metaverse Where appropriate, ecommerce retailers should expand digital experiences to concierge-like services where shoppers can build connections with other niche groups and brand ambassadors. AI-powered chatbots do some of this work in ecommerce because they're cheaper to staff than people, they're effective if they work, and they're there to be a one-to-one personal concierge experience. That capability isn't (yet) available in the metaverse, so brands should consider gating content or events based on consumer behavior on other platforms, like the website and social. Most retailers worth their salt are already collecting customer signals on ecommerce platforms. They understand customer preferences, price thresholds, and buying behavior. Retailers can capture signals from ecommerce and extend gated experiences to specific shopper cohorts. Digital merchandisers play a key role here in creating promotions for trending products, where a purchase gets shoppers access to gated content in the metaverse. Retailers should consider the metaverse as a new selling, marketing, and engagement channel to be incorporated into the omnichannel experience. The retail metaverse: Retail in 3D The intersection of unique brand experiences and smart adaptive technology will reach its heights in the metaverse. If retailers remember nothing else, they should remember this: "People first." It sounds like an oxymoron when you're talking about escaping into 2D and 3D worlds behind a screen, but the success of the three pillars I've shared above — community, experience, and engagement — hinges on a brand's ability to bring real, human customers along on this digital journey. Will Hayes is the CEO of Lucidworks. "
15,006
2,022
"Productsup nabs $70M to help brands sell better | VentureBeat"
"https://venturebeat.com/2022/04/06/productsup-nabs-70m-to-help-brands-sell-better"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Productsup nabs $70M to help brands sell better Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Berlin-based Productsup , a company that provides a software- as-a-service (SaaS) platform to help brands, retailers and marketplaces stay on top of product-to-consumer information value chains and win big in the complex omnichannel commerce landscape, today announced it has raised $70 million in a series B round of funding. While modern-day commerce seems to revolve around Amazon , the actual story is pretty different and complex. Retail giants are leveraging a myriad of channels and platforms to take their brands and products to customers across different markets. Now, at first, this sounds like an easy task where you just have to create a few accounts to get going. But, when you’re operating on a massive scale with multiple brands active in multiple countries, the simplicity vanishes. Imagine a beauty conglomerate managing 30 brands in 40 countries. For them, reaching customers in every region through major social channels would mean having 1,200 Instagram accounts, 12,00 Facebook accounts and so on. They’d have to be regularly updated, which means even a small mistake in the product information value chain – be it an inaccurate description or image – could lead to losing not only individual sales but also hard-earned consumer loyalty, revenue and profit. Productsup and its P2C platform Productsup’s product-to-consumer (P2C) platform solves this challenge by managing different P2C information value chains at scale. It gives a 360-degree view of product data and combines it with capabilities such as automated product content syndication, marketplace data management and feed management to make necessary improvements across marketing and selling channels. “Complete visibility of data across all channels, coupled with the ability to make updates in real-time, allows companies to easily manage the product experience,” Productsup said in a blog post. The company currently serves over 900 global brands and retailers with its platform, processing over two trillion products every month. It claims that the platform can reach more than 2500 channels and help companies increase their traffic by up to 1000% and in-store revenue by 20%. The growth numbers also back this promise. Productsup has seen 60% ARR growth in the last twelve months, with a gross revenue retention rate of 90% and a net revenue retention rate of 120%. 
Additional questions sent by VentureBeat about revenue numbers remained unanswered at the time of writing. Plan ahead With this round of funding, which was led by Europe's Bregal Milestone, Productsup will accelerate its product development and merger and acquisition efforts as well as expand into new markets to help companies better manage their products within the commerce ecosystem. The goal of the company is to turn the complexity of modern commerce into a competitive advantage — something that will come in very handy to stand out as the competition continues to grow in the retail sector. Globally, ecommerce sales are projected to surpass $5 trillion in 2022. However, it must be noted that Productsup is not the only one building solutions to help companies sell better. There are niche players like Salsify, Zentail, Feedonomics and Shopware, as well as giants such as Salesforce and Shopify in this space. "Our decision to partner with Productsup was based on its long-term, sustainable trajectory as a mission-critical enterprise-grade commerce solution," Cyrus Shey, managing partner of Bregal Milestone, said. "Whereas alternative vendors mostly offer point solutions, Productsup uniquely addresses the needs of the evolving commerce market for a single view of all product information value chains and offers seamless, end-to-end product data control – across all global channels and in real-time," he added. "
15,007
2,022
"Documenting the NFT voyage: A journey into the future | VentureBeat"
"https://venturebeat.com/2022/03/19/documenting-the-nft-voyage-a-journey-into-the-future"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Documenting the NFT voyage: A journey into the future Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Non-fungible tokens (NFTs) had a fantastic journey in 2021, inaugurating one of the most remarkable episodes in the history of emergent decentralized industries. NFT trading volume stood at $2.5 billion in June 2021. It surged ten times in the next six months, with total NFT sales reaching a whopping $23 billion by December 2021. In contrast, the 2020 total NFT trading volume amounted to just $100 million. To grasp the extraordinary success story of NFTs , we need to chart their trajectory throughout the last year. This article will adopt a chronological approach to explain the NFT craze and what lies ahead of us. Laying the foundations Cryptocurrency historians often debate whether any singular event led to the explosion of NFTs in the crypto domain. While there is no definite answer, the sale of Beeple’s NFT art for $69 million created ripples across the global market. People suddenly saw a spurt in NFT projects with attention-grabbing headlines on newspapers and web portals. Moreover, most of these NFTs reimagined the nature of artwork with their finite series of algorithmically-generated collectibles. CryptoPunks , one of the earliest NFT generative art projects on Ethereum, surpassed $1 billion in total sales in August 2021. A single CryptoPunk collectible sold for $10 million in December, becoming one of the most expensive NFT collectibles. Another popular NFT series to recently cross the $1 billion mark is the Bored Ape Yacht Club (BAYC). These projects became immensely popular with the active support and promotion from NFT influencers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, NBA player Stephen Curry bought a BAYC for $180K while hip-hop sensation Eminem bought another BAYC for $500K. The diverse NFT influencers’ community ranges from Reddit co-founder Alexis Ohanian and comedian Steve Harvey to Dallas Mavericks’ owner, Mark Cuban. There are also several anonymous NFT influencers on social media like Artchick , EllioTrades and Gmoney , who help drive some interest in this projects. But it is not just individuals who express bullish sentiments about NFTs. Several mainstream companies are adopting NFTs to diversify their investment strategies. The global payments giant, Visa , bought a CryptoPunk NFT for $150K in August 2021. 
Adidas , the famous sports brand, purchased a BAYC NFT in September 2021 for $156K. Moreover, some of the most popular NFTs were sold from the nearly 300-year-old auction houses Sotheby’s and Christie’s, which recorded $100 million and $150 million in NFT sales, respectively. However, NFT collectibles are not the only assets driving mainstream crypto adoption among retail and institutional investors. NFT-based play-to-earn games have enormously contributed to the growth of the crypto sector in 2021. Amidst the COVID-19 induced lockdowns and job losses, Southeast Asians turned to NFT games like Axie Infinity. Earnings from NFT gaming have helped a sizable population to bring food to the table. The examples cited above demonstrate that NFTs have become a cultural phenomenon with diverse use cases and utilities. On the one hand, people use NFTs to supplement their monthly income. But on the other hand, NFT collectibles emerge as a status symbol for the wealthy demographic. As a result, people are now putting up their NFTs as profile pictures (PFP) on different social media handles to showcase their collections. So much so that Twitter, which already contemplated NFT verification badges, has now come up with a solution on Twitter Blue. NFTs are unlocking a hitherto unexplored territory of digital ownership and asset provenance utilizing blockchain technology. These verifiable virtual assets are the core components of the emerging metaverse across multiple blockchain networks. However, NFT projects need to address some issues if they wish to sustain themselves in the long run. Sailing through a choppy landscape Presently, a handful of NFT projects are showing signs of instability. For example, developers of the highly successful Pudgy Penguins NFT spent all the treasury funds but failed to deliver on the promised roadmap. As a result, the Penguins community has voted out the founding members through its decentralized governance structure. Apart from that, NFTs have crazy floor price fluctuations, with speculators bidding up the price even in illiquid market conditions. For example, last year, a clip-art rock NFT with no specific utility had an outrageous floor price of $2.2 million. This tendency of some speculative investors to hype up a price metric without reason and rationale can be detrimental. This turbulence in the NFT market is not very surprising. While the technology and concept of NFTs are revolutionary, the NFT sector is still in the embryonic stages. At such an early stage of development, things can be pretty unstable. But NFT projects can succeed if they focus on three essential factors: innovation, community, and ecosystem. The most crucial task for any NFT project is to focus on innovative design and diversified utilities for its users. Moreover, the first-to-market NFT project will always have the edge over other competing projects to generate value. Unfortunately, while making copies of the original (forks) is easy, it does not always translate into a successful project. For example, the legendary Ethereum-based CryptoPunks from Larva Labs is the inspiration behind PolygonPunks residing on the Polygon blockchain. Although PolygonPunks is very successful, many consider it a ‘derivative collection’ that can compromise buyers’ safety. This is why the NFT marketplace OpenSea delisted PolygonPunks after a request from developers at Larva Labs. The second characteristic of a good NFT project is how strong the community is. 
A genuinely decentralized project with a well-knit community goes a long way in making it a success. As demonstrated above, the Pudgy Penguins and CryptoPunks communities are robust enough to protect the legacy of the projects. Moreover, interoperable NFTs help forge communities across blockchain networks, making them stronger. Another critical factor for consideration is the blockchain on which the NFT resides, since each network ecosystem is different. For example, Ethereum has very high gas fees, with NFT whales holding more than 80% of the blockchain's NFTs. On the other hand, blockchains like Binance Smart Chain, Solana and Tezos have negligible gas fees. Moreover, many of them are carbon-neutral networks, attracting a lot of environmentally conscious NFT artists. If NFT projects focus on the qualities mentioned above during the developmental stages, most of them will sustain themselves long-term. But what will the NFT landscape look like in the immediate future? Hope on the horizon for NFT-based projects Undoubtedly, 2022 will be a year of mind-boggling innovations and growth in the NFT space. As a result, we might see a steady proliferation of NFT use cases in previously unimagined ways. One such usage can be through NFT-based financial instruments with tokenized insurance, real estate, bonds, debts, and commodities. NFTs can open up new ways of collateralized lending or rent and help in raising capital for startups. Moreover, NFT derivatives might become very popular this year. So, gamers can trade their in-game NFT assets like cars and weapons on the derivatives market, bringing in more liquidity. Additionally, blue-chip NFT indexes can allow new investors to participate in the most successful NFT projects. Several charitable organizations and companies are using NFTs for fundraising campaigns as well. The few NFT projects that currently offer the services mentioned earlier remain immature and underdeveloped. Significant innovations and further diversification in everyday use cases are yet to reach the people. As the year progresses, the value and applications of NFTs will diversify and thus disrupt a variety of industries. However, the success of the NFT sector will depend to a large extent on how fair, transparent, and secure NFTs are. Game theory has shown that random numbers are the fundamental building blocks of any fair and safe system. Most blockchain networks, including most NFT protocols, depend on random numbers for their routine system operations. First, they are used in cryptographically generated public-private keys and digital signatures. Second, randomness in input and output programs ensures a fair chance for all participants in NFT-based games. Third, random numbers are crucial for hash power and in Proof-of-Work consensus protocols. With the expansion of the NFT industry, developers will need massive sets of random numbers for their projects. But as the American mathematician Robert Coveyou said, "The generation of random numbers is too important to be left to chance." Thus, "Random numbers should not be generated with a method chosen at random," according to Turing Award winner Donald Knuth. Rigorous research and solid science are crucial to generating random numbers. If everything goes well, NFTs are headed for a bright future. Felix Xu is the cofounder of ARPA and Bella Protocol.
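To make the role of randomness discussed above concrete, here is a minimal commit-reveal sketch in Python, a common pattern for producing a shared random seed that no single participant can bias on their own. It is a generic illustration, not the scheme used by any particular NFT protocol mentioned in this article.

```python
import hashlib
import secrets

def commit(value: bytes) -> str:
    """Publish only the hash of a secret value (the commitment)."""
    return hashlib.sha256(value).hexdigest()

# Phase 1: each participant picks a secret and publishes its commitment.
secrets_by_player = {name: secrets.token_bytes(32) for name in ["alice", "bob", "carol"]}
commitments = {name: commit(value) for name, value in secrets_by_player.items()}

# Phase 2: everyone reveals; each reveal is checked against the earlier commitment.
assert all(commit(secrets_by_player[name]) == commitments[name] for name in commitments)

# The shared seed hashes all revealed secrets together: nobody could predict it
# at commit time, and changing any single secret changes the result.
seed = hashlib.sha256(b"".join(secrets_by_player[n] for n in sorted(secrets_by_player))).hexdigest()
print("shared random seed:", seed)
```

The point of the pattern is simply that fairness comes from a procedure everyone can verify, which is the property the article argues NFT games and protocols need from their random numbers.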
"
15,008
2,022
"Hackers steal $620M in Ethereum and dollars from Axie Infinity maker Sky Mavis' Ronin network | VentureBeat"
"https://venturebeat.com/2022/03/29/hackers-steal-620m-in-ethereum-and-dollars-in-axie-infinity-maker-sky-mavis-ronin-network"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Hackers steal $620M in Ethereum and dollars from Axie Infinity maker Sky Mavis’ Ronin network Share on Facebook Share on X Share on LinkedIn Axie Infinity lets players battle with NFT Axie characters. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Sky Mavis reported that the Ronin Network which supports its Axie Infinity game has been hacked and thieves stole 173,600 in Ethereum cryptocurrency (worth $594.6 million) and $25.5 million in U.S. dollars, stealing a total of $620 million. If Sky Mavis, the maker of the Axie Infinity blockchain game, can’t recover the funds, that’s a huge hit to its overall treasury and a black eye for blockchain-based security, as the whole point of putting the game on the blockchain — in this case a Layer 2 network dubbed the Ronin Network — is to enable better security. The Ronin bridge and Katana Dex enabling transactions have been halted. For now, that means that players who have funds stored on the network can’t access their money right now. The stolen funds only represent a portion of the overall holdings of Sky Mavis and its Axie decentralized autonomous organization (DAO). “We are working with law enforcement officials, forensic cryptographers, and our investors to make sure all funds are recovered or reimbursed. All of the AXS, RON, and SLP on Ronin are safe right now,” said Sky Mavis in a statement. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The hack will likely be considered one of the biggest hacks in cryptocurrency history , at least according to data from Comparitech. The company said there was a security breach on the Ronin Network itself. Earlier today, the firm discovered that on March 23, Sky Mavis’s Ronin validator nodes and Axie DAO validator nodes were compromised resulting in 173,600 ETH (valued at $594.6 million at the moment) and $25.5 million drained from the Ronin bridge in two transactions. So far, the stolen cryptocurrency hasn’t been transferred from the account that did the attack, the company said. The validator nodes are external entities that verify the information on the blockchain and compare notes with each other to ensure the blockchain’s information is accurate. Blockchain is (believed to be) a secure and transparent digital ledger, and Ethereum is one of the biggest networks based on the technology. Ethereum is both a blockchain protocol as well as the name of the cryptocurrency based on the protocol. Sky Mavis uses the blockchain to verify the uniqueness of nonfungible tokens (NFTs), which can uniquely authenticate digital items such as the Axie creatures used in the Axie Infinity game. NFTs exploded in popularity last year and enabled Sky Mavis to raise $152 million at a $3 billion valuation in October. 
But blockchain games are also a flashpoint in the industry now, as critics say they are full of Ponzi schemes, rug pulls, and other kinds of anti-consumer scams. Ethereum has its drawbacks, as transactions on it are slow and consume a lot of energy, as it taps a lot of computers worldwide to do the verification work. To alleviate that, companies like Sky Mavis have created Layer 2 solutions such as the Ronin Network. That network can execute transactions far more quickly, inexpensively, and with smaller environmental impacts than doing transactions on Ethereum itself. But this offchain processing comes at a risk, as Sky Mavis has just learned. Sky Mavis set up a network of computing nodes to validate transactions on its Ronin Network, but if hackers can gain 51% control of that network, then they can create fake transactions and steal funds stored on the network. Sky Mavis said that the attacker used hacked private keys in order to forge fake withdrawals. Sky Mavis said it discovered the attack this morning after a report from a user who was unable to withdraw 5,000 ETH from the bridge. Details about the attack Sky Mavis' Ronin chain currently consists of nine validator nodes. In order to recognize a deposit event or a withdrawal event, five out of the nine validator signatures are needed. The attacker managed to get control over Sky Mavis's four Ronin validators and a third-party validator run by Axie DAO. The validator key scheme is set up to be decentralized so that it limits an attack vector, similar to this one, but the attacker found a backdoor through Sky Mavis' gas-free RPC node, which the attacker used to get the signature for the Axie DAO validator. This traces back to November 2021, when Sky Mavis requested help from the Axie DAO to distribute free transactions due to an immense user load. The Axie DAO allowlisted Sky Mavis to sign various transactions on its behalf. This was discontinued in December 2021, but the allowlist access was not revoked. "Once the attacker got access to Sky Mavis systems, they were able to get the signature from the Axie DAO validator by using the gas-free RPC," Sky Mavis said. "We have confirmed that the signatures in the malicious withdrawals match up with the five suspected validators," said Sky Mavis. Actions taken Sky Mavis said it moved swiftly to address the incident once it became known and it is actively taking steps to guard against future attacks. To prevent further short-term damage, the company has increased the validator threshold from five to eight. "We are in touch with security teams at major exchanges and will be reaching out to all in the coming days," the company said. "We are in the process of migrating our nodes, which is completely separated from our old infrastructure." The company has also temporarily paused the Ronin Bridge to ensure no further attack vectors remain open. Binance has also disabled its bridge to/from Ronin to err on the side of caution. The bridge will be opened up at a later date once the company is certain no more funds can be drained. Sky Mavis has also temporarily disabled the Katana DEX due to the inability to arbitrage and deposit more funds to Ronin Network. And it is working with Chainalysis to monitor the stolen funds, as transactions on the blockchain can be tracked. Next steps The company said it is working directly with various government agencies to ensure the criminals are brought to justice.
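The 5-of-9 validator scheme at the center of the breach is, at its core, a signature threshold check. The sketch below is a simplified illustration of that idea, not Ronin's actual code; the validator names and the split between Sky Mavis, Axie DAO, and third parties are placeholders for how the nine slots might be assigned.

```python
# Simplified m-of-n approval check, illustrating why five compromised keys were enough.
VALIDATORS = (
    {f"sky_mavis_{i}" for i in range(1, 5)}        # four Sky Mavis-run validators
    | {"axie_dao"}                                  # validator run by the Axie DAO
    | {f"third_party_{i}" for i in range(1, 5)}     # placeholder for the remaining slots
)
THRESHOLD = 5  # five of the nine validator signatures approve a withdrawal

def withdrawal_approved(signers: set[str]) -> bool:
    valid_signers = signers & VALIDATORS            # only count known validators
    return len(valid_signers) >= THRESHOLD

# The attacker held keys for four Sky Mavis validators plus the Axie DAO validator.
compromised = {"sky_mavis_1", "sky_mavis_2", "sky_mavis_3", "sky_mavis_4", "axie_dao"}
print(withdrawal_approved(compromised))             # True -> a forged withdrawal clears the threshold
```

Raising the threshold to eight, as Sky Mavis did after the attack, means the same five keys would no longer be enough to approve a withdrawal.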
“We are in the process of discussing with Axie Infinity / Sky Mavis stakeholders about how to best move forward and ensure no users’ funds are lost,” the company said. Originally, Sky Mavis chose the five out of nine threshold for validators as some nodes didn’t catch up with the chain, or were stuck in syncing state. Moving forward, the threshold will be eight out of nine. The company will be expanding the validator set over time, on an expedited timeline. Most of the hacked funds are still in the alleged hacker’s wallet: https://etherscan.io/address/0x098b716b8aaf21512996dc57eb0615e2383e2f96 [Update: Blockchain Intelligence Group, a global cryptocurrency intelligence and compliance company, said the money has now been moved elsewhere and they are tracking it. Here’s the details: Funds sent to exchanges: FTX (Exchange): 1,219.982731106253 ETH Crypto (Exchange): 1 ETH Huobi (Exchange): 3,750 ETH So far 4,970 ETH ($16,931,672.478) has already moved to exchanges. The amount unspent in 4 addresses could potentially move in the same direction. And the Total unspent amount in these addresses: 177,192.66 ETH.] Sky Mavis is figuring out exactly how this happened. “As we’ve witnessed, Ronin is not immune to exploitation and this attack has reinforced the importance of prioritizing security, remaining vigilant, and mitigating all threats. We know trust needs to be earned and are using every resource at our disposal to deploy the most sophisticated security measures and processes to prevent future attacks,” Sky Mavis said. The company said that ETH and USDC deposits on Ronin have been drained from the bridge contract. Sky Mavis said it is working with law enforcement officials, forensic cryptographers, and our investors to make sure there is no loss of user funds. All of the AXS, RON, and SLP on Ronin are safe right now, the company said. “As of right now users are unable to withdraw or deposit funds to Ronin Network. Sky Mavis is committed to ensuring that all of the drained funds are recovered or reimbursed,” the company said. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,009
2,022
"The creation of the metaverse: The market | VentureBeat"
"https://venturebeat.com/2022/05/21/the-creation-of-the-metaverse-the-market"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The creation of the metaverse: The market Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Stephen King said it best when he wrote, “sooner or later, everything old is new again.” And that’s precisely what’s going on when it comes to the metaverse. It’s true; the technologies driving this incredible space forward are anything but old. However, the concept itself dates back nearly 30 years. Love them or hate them, Facebook deserves a ton of credit for bringing the metaverse mainstream after they rebranded themselves as Meta. But the idea is anything but original. The metaverse’s origins can be traced to 1992. The same year Kris Kross was making us “Jump,” and dial-up internet access first became available. The creation of the metaverse 1992 was also the year Neal Stephenson first used the term “metaverse” within the pages of his dystopian novel “Snow Crash.” But regardless of who gets credit, the metaverse is now as much a part of popular tech culture as things like blockchain, AR/VR, AI and quantum computing. All of which, by the way, are now combining to drive what the metaverse will eventually be. And that’s exactly what makes it so exciting. As Dating Group chief strategy officer KJ Dhaliwal explains it: Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “The metaverse is the culmination of many different amazing technologies. Because of that, it offers immense potential to revolutionize our lives and how we communicate, transact business and play. And one of the metaverse’s most exciting features is presence, which is a sense that you’re physically in a digital space with others.” The metaverse is here and now Sound too philosophical or futuristic? Then you aren’t seeing what’s before your very eyes. Many of us are already meeting in virtual spaces , daily. But we’re only scratching the surface of what’s to come. However, there is much work to do in order for the market to mature. There is no industry standard for what the metaverse really is at the moment. The metaverse is still very much an open frontier. And that is a big reason why people are so confused by it. So what is the metaverse, anyway? Many feel it is simply a 3D model of the internet. At the same time, others take a more extreme view. 
They see it as a parallel universe of sorts, where the physical completely connects to the digital in a singularity known as the “phygital.” The market opportunity However, going from the simple to the extreme will take a lot of technology. And new technologies and services are where the market opportunity is for entrepreneurs and investors wanting to strike gold within the metaverse. And it seems there is a lot of gold to be had. According to Statista , we are just barely scratching the surface of what the metaverse’s market value will one day be. The company pins its 2022 market value at a few shades north of $47 billion. However, they anticipate it will surge to $678.8 billion by 2030. That means there will potentially be a few more billionaires over the next eight years. That’s exciting math, no doubt. But it makes one wonder where the real opportunities are. Should you create a new AR or VR startup? Will AI bring the magic? Or, maybe supercomputing is where you should spend your cycles? Where to stake your claim To come to any conclusion, one must first understand where the market is right now, the obstacles that stand in the way of progress and where the market is naturally positioned to go. And virtual reality (in its current shape) is probably not where you should spend your time and resources. A recent report by Piper Sandler found that 50% of the Gen Z’ers surveyed don’t plan to purchase a VR headset any time soon. And it’s not because they already have two or three. A mere 26% admitted to owning a single VR device. But it gets worse. Less than 5% of those that have a headset use it daily. The key phrase to remember is, “daily use.” If only 5% of the youngest generation with buying power is using something, run away from it. And run away fast. But what are they using daily, you ask? Consider Gen Z According to the Los Angeles Times , Gen Z spends a lot of time on screens (mostly their smartphones). According to the paper, they watch an average of 7.2 hours of video a day, which is nearly an hour more than the 6.3 hours spent by Gen X. And, as the saying goes, old habits are hard to break. Given the wide adoption of mobile and the incredible number of hours that generations old and new spend on their screens, smartphones have a good chance of reigning supreme when it comes to the metaverse. But they cannot do it in their current state. There needs to be a new technology that connects our smartphones to our realities in a way that doesn’t tether us to a headset or similar apparatus. And that next big thing is already in the works. The blossoming of 5G, the evolution of computing and the smartening of AI have opened the door to the next big thing. Brain-computer interface (BCI) technology is the thing that will complete the marriage between the physical and the digital. But the fledgling field needs more experts and investors to help push its advancement along. The future is BCI There is momentum, however. NextMind, which VentureBeat covered at the end of 2020, was recently acquired by Snap for an undisclosed sum. And in a company statement, Snap wrote , “NextMind has joined Snap to help drive long-term augmented reality research efforts within Snap Lab. 
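For context, the Statista figures cited above imply a compound annual growth rate of roughly 40% between 2022 and 2030. This is a simple arithmetic check under the simplifying assumption of steady year-over-year growth; Statista publishes only the endpoints, not the path.

```python
# Implied CAGR from the two Statista data points quoted in the article.
value_2022 = 47.0      # estimated 2022 market value, in billions of dollars (approximate)
value_2030 = 678.8     # projected 2030 market value, in billions of dollars
years = 2030 - 2022

cagr = (value_2030 / value_2022) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")   # roughly 39-40% per year
```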
Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality." Unlike its annoying VR headset cousin, AR — or better yet, MR (mixed reality) — done well can be a discreet technology that blends our physical and digital worlds in real time. And this will be made more profound with advances in BCI technology. Don't misunderstand, though. There are still a lot of picks and shovels to be sold before finding the motherlode with BCI. Veljko Ristic is Chief Growth Officer at SDV Lab. "
15,010
2,022
"What are the security risks of open sourcing the Twitter algorithm? | VentureBeat"
"https://venturebeat.com/2022/05/27/open-source-twitter-security-risks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What are the security risks of open sourcing the Twitter algorithm? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It has been just over a month since Elon Musk announced his intention to open source the Twitter algorithm to increase transparency of the platform’s use of artificial intelligence (AI) and machine learning (ML) to promote or demote posts. The decision has generated a lively debate on all sides, as well as in the security industry, where experts are divided on whether open sourcing the algorithm will be a net positive for security or not. Musk’s idea to take Twitter open source could highlight vulnerabilities on the level of Log4Shell and Spring4Shell to the site, according to critics. Yet for supporters , the decision could even enhance the platform’s security. The bad: Attackers may have a chance to find entry points One of the largest security risks of making the code open-source is that it provides threat actors with a chance to analyze it for security vulnerabilities. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Open[ing] up Twitter’s recommendation algorithms is a two-edged sword. While having more eyes on the code can promote better security, it also leaves the door open for malicious researchers to gain insights they wouldn’t ordinarily have,” said Mike Parkin senior technical engineer at Vulcan Cyber. As a cyberrisk management specialist, Parkin suggests that opening the recommendation algorithm could enable “disinformation” to spread on the platform further as interested parties learn to manipulate it and sidestep moderator’s checks and balances — while giving users multiple versions of the platform to patch. The good: Increased transparency to mitigate vulnerabilities On the other side of the debate, other analysts and security experts recommend that increasing transparency over the platform is a positive, because it allows the platform’s user base a chance to play a role in vulnerability management. Instead of Twitter having a small team of researchers managing vulnerabilities, opening the code could potentially provide them with support from thousands of users, who can help improve the platform’s security and integrity. “When discovering vulnerabilities in software, access to source code is analogous to a factor having access to an MRA when diagnosing illness. 
An 'inside-out' view will always be more useful and complete than one formed by looking only from the outside in," said Casey Ellis, founder and CTO at Bugcrowd. "We see this all the time in crowdsourced security testing, and the security advantage for Twitter will be more thorough feedback from the crowd around issues that need to be fixed." Ellis adds that while it does provide attackers an opportunity to identify vulnerabilities, whether the security implications are positive or negative will come down to Twitter's ability to act on vulnerability information and fix flaws before they are exploited. How enterprises can help mitigate the risks While it remains unclear what impact open sourcing the algorithm will have, there are some simple steps organizations can take to help mitigate the risks. Tim Mackey, principal security strategist at Synopsys Software Integrity Group, believes that an open-source governance program could help to address the risks effectively. "Businesses can mitigate some of that risk by identifying which open-source components are powering the Twitter open-source technologies and then implementing an open-source governance program for them," Mackey said. "Such a program would proactively monitor for new vulnerability disclosures for these components, and enable a business to react quickly to the change in risk. This is similar to the proactive model some businesses used to minimize their exposure to the Log4Shell vulnerability." Mackey recommends that enterprises implement an open-source governance program for the open-source components powering Twitter's technologies, to proactively monitor for new vulnerability disclosures so that security teams are prepared to address them. "
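As one concrete starting point for the kind of governance program Mackey describes, the sketch below checks a single open-source component against the public OSV.dev vulnerability database. It is illustrative rather than any vendor's tooling: the package name and version are placeholders, and a real program would iterate over a full software bill of materials and feed results into alerting or ticketing.

```python
import requests

def known_vulnerabilities(name: str, version: str, ecosystem: str) -> list[str]:
    """Query the public OSV.dev API for published vulnerabilities affecting one component."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

# Placeholder component; in practice, loop over everything in your dependency inventory.
# Log4j 2.14.1 is used here because the article references Log4Shell.
print(known_vulnerabilities("org.apache.logging.log4j:log4j-core", "2.14.1", ecosystem="Maven"))
```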
15,011
2,021
"Pinecone CEO on bringing vector similarity search to dev teams | VentureBeat"
"https://venturebeat.com/2021/08/02/pinecone-ceo-on-bringing-vector-similarity-search-to-dev-teams"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Pinecone CEO on bringing vector similarity search to dev teams Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The traditional way for a database to answer a query is with a list of rows that fit the criteria. If there’s any sorting, it’s done by one field at a time. Vector similarity search looks for matches by comparing the likeness of objects, as captured by machine learning models. Pinecone.io brings “vector similarity” to the average developer by offering turnkey service. Vector similarity search is particularly useful with real-world data because that data is often unstructured and contains similar yet not identical items. It doesn’t require an exact match because the so-called closest value is often good enough. Companies use it for things like semantic search, image search, and recommender systems. Success often depends upon the quality of the algorithm used to turn the raw data into a succinct vector embedding that effectively captures the likeness of objects in a dataset. This process must be tuned to the problem at hand and the nature of the data. An image search application, for instance, could use a simple model that turns each image into a vector filled with numbers representing the average color in each part of the image. Deep learning models that do something much more elaborate than that are very easy to get nowadays, even from deep learning frameworks themselves. We sat down with Edo Liberty, the CEO and one of the founders of Pinecone, and Greg Kogan, the VP of marketing, to talk about how they’re turning this mathematical approach into a Pinecone vector database that a development team can deploy with just a few clicks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! VentureBeat: Pinecone specializes in finding vector similarities. There have always been ways to chain together lots of WHERE clauses in SQL to search through multiple columns. Why isn’t that good enough? What motivated Pinecone to build out the vector distance functions and find the best? Edo Liberty: Vectors are by no means new things. They have been a staple of large-scale machine learning and a part of machine learning-driven services for at least a decade now in larger companies. It’s been kind of “table stakes” for the bigger companies for at least a decade now. My first startup was based on technologies like this. Then, we used it at Yahoo. Then, we built another database that deployed it. 
It’s a big part of image recognition algorithms and recommendation engines, but it really didn’t hit the mainstream until machine learning. In pretrained models, AI scientists started generating these embeddings in vector representations of complex objects pretty much for everything. So it just became a lot lower and became a lot more common. People suddenly started having these vectors and suddenly, it’s like they are asking “OK, what now?” Greg Kogan: The reason why clauses fall short is that they are only as useful as the number of facets that you have. You can string together WHERE clauses, but it won’t produce a ranked answer. Even for something as common as semantic search, once you can get a vector embedding of your text document, you can measure the similarity between documents much better than if you’re stringing together words and just looking for keywords within the document. Other things we’re hearing is search for other unstructured data types like images or audio files. Things like that where there was no semantic search before. But now, they can convert unstructured data into vector embeddings. Now you can do vector similarity search on those items and do things like find similar images or find similar products. If you do it on user behavior data or event logs, you can find similar events, similar shoppers, and so on. ‘Once it’s a vector, it’s all the same to us’ VentureBeat: What kind of preprocessing do you need to do to get to the point where you’ve got the vector? I can imagine what it might be for text, but what about other domains like images or audio? Kogan: Once it’s a vector, it’s all the same to us. We can perform the same mathematical operations on it. From the user’s point of view, they would need to find an embedding model that works with their type of data. So for images, there are many computer vision models available off the shelf. And if you’re a larger company with your own data science team, you’re most likely developing your own models that will transform images into vector embeddings. It’s the same thing for audio. There’s wav2vec for audio, for instance. For text and images, you can find loads of off-the-shelf models. For audio and streaming data, they’re hard to find so it does take some data science work. So the companies that have the most pressing need for this are those more advanced companies that have their own data science teams. They’ve done all the data science work and they see that there’s a lot more they can do with those vectors. VentureBeat: Are any of the models more attractive, or does it really involve a lot of domain-specific kind of work? Kogan: The off-the-shelf models are good enough for a lot of use cases. If you’re using basic semantic search over documents, you can find some off-the-shelf models, like sentence embeddings and things like that. They are fine. If your whole business depends on some proprietary model, you may have to do it on your own. Like if you’re a real estate startup or financial services startup and your whole secret sauce is being able to model something like financial risk or the price of a house, you’re going to invest in developing your own models. You could take some off-the-shelf model and retrain it on your own data to eke out some better performance from it. Large banks of questions generate better results VentureBeat: Are there examples of companies that have done something that really surprised you, that built a model that turned out to be much better than you thought it would even end up? 
Liberty: If you have a very large bank of questions and good answers to those questions, a common and reasonable approach is to look for what is the most similar question and just return the best answer that you have for this other question, right? It sounds very simplistic, but it actually does a really good job, especially if you have a large bank of questions and answers. The larger the collection, the better the results Kogan: We didn’t even realize it could be applicable for bot detection and image duplication. So if you’re a consumer company that allows uploading of images, you may have a bot problem where a user uploads some bad images. But once that image is banned, they try to upload a slightly tweaked version of that image. Simply looking up a hash of that image is not going to find you a match. But if you look for similarity, like closely similar images, you suspend that account immediately or at least flag it for review. We’ve also heard this for financial services organizations, where they get way more applications than they can manually review. So they want to flag applications that resemble previously flagged fraudulent applications. VentureBeat: Is your technology proprietary? Did you build this on some kind of open source code? Or is it some mixture? Kogan: At the core of Pinecone is a vector search library that is a proprietary index. A vector index. We find that people don’t care so much about exactly which index it is or whether it’s proprietary or open source. They just want to add this capability to their application. How can I do that quickly and how can I scale it up? Does it have all the features we need? Does it maintain its speed and accuracy at scale? And who manages the infrastructure? Liberty: We do want to contribute to the open source community. And we’re thinking about our open core strategy. It’s not unlikely that we will support open source indexes publicly soon. What Greg said is accurate. I’m just saying that we are big fans of the open source community and we would love to be able to contribute to it as well. VentureBeat: Now it seems that if you’re a developer that you don’t necessarily integrate it with any of the databases per se. You just kind of side-load the data into Pinecone. When you query, it returns some kind of key and you go back to the traditional database to figure out what that key means. Kogan: Exactly right. Yes, you’re running it alongside your warehouse or data lake. Or you might be storing the main data anywhere. Soon we’ll actually be able to store more than just the key in Pinecone. We’re not trying to be your source of truth for your user database or your warehouse. We just want to eliminate the round trips. Once you find your ranked results or similar items, then we’ll have a bit more there. If all you want is the S3 location of that item or the user ID, you’ve got it in your results. More flexibility on pricing VentureBeat : On pricing, it looks like you just load everything into RAM. Your prices are determined by how many vectors you have in the dataset. Kogan: We used to have it that way. We recently started letting some users have a little bit more control over things like the number of shards and replicas. Especially if they want to increase their throughput. Some companies come to us with insanely high throughput demands and latency demands. When they sign up and they create an index, they can choose to have more shards and more replicas for higher availability and throughput. 
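The question-matching approach Liberty describes earlier can be sketched in a few lines, assuming an off-the-shelf sentence-embedding model such as the sentence-transformers package and its public all-MiniLM-L6-v2 checkpoint (both are assumptions for illustration, not anything Pinecone ships): embed the bank of known questions once, embed the incoming question, and return the answer attached to the most similar stored question.

```python
# FAQ retrieval by nearest question, using an off-the-shelf sentence-embedding model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A toy bank of known questions and their curated answers.
faq = {
    "How do I reset my password?": "Use the 'Forgot password' link on the sign-in page.",
    "What is your refund policy?": "Refunds are issued within 14 days of purchase.",
    "How long does shipping take?": "Orders ship within 2-3 business days.",
}
questions = list(faq.keys())
question_vectors = model.encode(questions, convert_to_tensor=True)

incoming = "I forgot my login credentials"
incoming_vector = model.encode(incoming, convert_to_tensor=True)

# Rank stored questions by cosine similarity and return the best-matching answer.
scores = util.cos_sim(incoming_vector, question_vectors)[0]
best = int(scores.argmax())
print(questions[best], "->", faq[questions[best]])
```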
In that case, you still have the same amount of data, but because it’s being replicated, you’re going to pay more because you’re looking for data on more machines. VentureBeat: How do you handle the jobs where companies are willing to wait a little bit and don’t care about a cold start? Kogan: For some companies, the memory-based pricing doesn’t make sense. So we’re happy to work with companies to find another model. Liberty: What you’re asking about is a lot more fine-grained control over costs and performance. We do work with larger customers and larger teams. We just sat down with a very large company today. The workload is 50 billion vectors. Usually, we have a very tight response time. Let’s say 20, 30, 40, 50 milliseconds is typical 99% of the time. But they say that this is an analytical workload and we are happy to have a full second latency or even two seconds. That means they can pay less. We are very happy to work with customers and find trade-offs, but it’s not something that’s open in the API today. If you sign in on the website and use the product, you won’t have those options available to you yet. Kogan: We simplified the self-serve pricing on the website to make it easier for people to just jump in and play around with it. But once you have 50 billion vectors and crazy performance or scale requirements, come talk to us. We can make it work. Our initial bet was that more and more companies would use vector data as machine learning models become more prevalent and the data scientists become more productive. They realize that you can do a lot more with your data, once it’s going to a vector format. You can collect less of it and still succeed. There are privacy and consumer protection implications as well. It’s becoming less and less extreme of a bet. We are seeing the early adopters, the most advanced companies have already done this. They’re using vector similarity search and using recommendation systems for their search results. Facebook uses them for their feed ranking. The vision is that more companies will leverage vector data for recommendation and many use cases still to be discovered. Liberty: The leaders already have it. It’s already happening. It’s more than just a trend. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,012
2,021
"The future of AI deployments reaching production is bright in 2021 | VentureBeat"
"https://venturebeat.com/2020/12/16/the-future-of-ai-deployments-reaching-production-is-bright-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The future of AI deployments reaching production is bright in 2021 Share on Facebook Share on X Share on LinkedIn Presented by Appen This article is fifth in a 5-part series on predictions in AI in 2021 — catch up on the first, second, third, and fourth in the series. Most artificial intelligence (AI) projects fail. About 80% never reach deployment, according to Gartner, and those that do are only profitable about 60% of the time. When we take a moment to consider the signs of successful AI all around us, these numbers may come as a surprise. We have voice assistants for our phones and homes, optimized online product searches, advanced fraud detection at our banks, and more. Yet as it stands now, we’ll never see the fruits of the majority of AI endeavors. In the final part of our five-part series on 2021 predictions, we look at the future of successful AI deployments. These statistics might seem disheartening for companies that are turning to AI for positive impacts like greater revenue, lower costs, and more personalized, effective customer experiences, but we’re seeing signs of promise. In 2021, we predict that companies will start to overcome the 80% failure rate of deployment. Gartner has further predicted that by 2024, 75% of organizations will shift from piloting to operationalizing AI. This change in momentum will be driven by greater accessibility to data and the development of highly flexible models to adapt to specific business needs. Why do AI projects fail? Considering AI has been around since the 1950s in some form, one might wonder why we don’t have a perfect blueprint yet for deploying it successfully. In reality, there are tons of variables that go into building effective AI, which makes it difficult to prescribe set steps that will work well every time, for every company. Still, progress is being made in gathering best practices (mainly through learnings from success stories and failures), and with those, common patterns are emerging on what often leads to failure. Here are a few areas where companies can go wrong: They didn’t define a narrow business problem. Many organizations choose the wrong problem to solve. They may select something that’s too general, resulting in a model that’s useless for specific business use cases. They might choose a problem without enough data to support the solution. They also might choose a problem that would be better solved through something other than AI. Any time there’s a misalignment with business priorities, problems will occur. 
It’s also important to ensure all stakeholders, from the top down, are clear on the objectives of the project. They don’t have the right team. AI has a talent gap problem, which means companies often have to scramble to recruit team members with the right skillsets for building effective AI. Most organizations are currently ill-designed to support scalable AI ventures, requiring re-orgs, new hiring efforts, and leveraging of third-party resources. They don’t have enough high-quality data. To make accurate predictions, AI models need a massive amount of data. These models must be trained to handle any potential use case they’ll face in production, which means datasets must cover a wide range of use and edge cases. Many companies fail to collect the appropriate amounts of data for their models, and have poor data management techniques for accurately labeling that data. This results in poor decision-making by the model. They didn’t confront bias head-on. In our diverse, global business world, it’s vital to take a responsible approach to AI from the start of model build and beyond. Most companies don’t set out intending to create biased models, but do so accidentally by failing to include diverse perspectives and data in their processes. Of the projects that are deployed successfully, many face challenges with model drift — or changing external conditions — that lower the model’s accuracy or even make it obsolete. Models must consistently be retrained with new, relevant data to overcome this hurdle. With all of these factors in mind and the many additional aspects that weren’t highlighted, it may be clearer now how difficult it is to deploy (and maintain) AI successfully. Looking forward Overcoming the failure rate will be challenging for companies large and small, but the future’s looking brighter for a number of reasons. The first is that data is more accessible and abundant than ever. When we recall that effective machine learning is built upon a bounty of data, we should expect to see models that produce greater accuracy, models that cover more specific use cases where data may have been hard to come by in the past, and models that work better for more end users. The second trend we’re seeing is that we’re learning. The wealth of information available has allowed companies to see what others are doing in the space. Companies are conquering the mistakes of the past by gaining knowledge around best practices and common pitfalls. More resources are available than ever before. For example, Alyssa Simpson Rochwerger and Wilson Pang’s upcoming book, Real World AI: A Practical Guide to Responsible Machine Learning , includes real-world success and failure stories, and clear action plans for lucrative deployments. With these tools under their belt, we expect more companies pursuing AI to succeed. Despite the pandemic, there’s a great chance AI will accelerate because of it. Already, social distancing has resulted in the development of AI that’s more flexible to changing supply chains and customer demands. This flexibility, bolstered by ML techniques like reinforcement learning, may create AI solutions that are increasingly adaptable to change, and therefore more effective post-pandemic and longer-term. Considering these factors collectively, the future for AI projects holds promise. The companies that learn, adapt, and mobilize quickly will be the frontrunners in the space; they’ll be more equipped to reach production, and ultimately, the holy grail of profitably. 
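As one concrete illustration of the model-drift challenge mentioned above, teams sometimes compare the distribution of a key input feature in production against the training data and flag the model for retraining review when the two diverge. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the data and the 0.01 threshold are invented for illustration and are not part of any particular vendor's tooling.

```python
# Minimal data-drift check: compare a feature's training and production distributions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=50.0, scale=10.0, size=5_000)    # what the model was trained on
production_feature = rng.normal(loc=57.0, scale=12.0, size=1_000)  # what it now sees in production

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4f}) - consider retraining.")
else:
    print("No significant drift detected.")
```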
2021 will serve as a critical turning point for AI ventures, as the tides shift away from failure and toward success. At Appen, we have spent over 20 years annotating and collecting data using the best-of-breed technology platform and leveraging our diverse crowd to help ensure you can confidently deploy your AI models. For a practical guide to responsible machine learning and successful deployments, check out the upcoming book by Alyssa Simpson Rochwerger and Wilson Pang, Real World AI: A Practical Guide to Responsible Machine Learning. Wilson Pang is CTO at Appen. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
15,013
2,021
"How to turn AI failure into AI success | VentureBeat"
"https://venturebeat.com/2021/12/03/how-to-turn-ai-failure-into-ai-success"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to turn AI failure into AI success Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The enterprise is rushing headfirst into AI-driven analytics and processes. However, based on the success rate so far, it appears there will be a steep learning curve before it starts to make noticeable contributions to most data operations. While positive stories are starting to emerge, the fact remains that most AI projects fail. The reasons vary, but in the end, it comes down to a lack of experience with the technology, which will most certainly improve over time. In the meantime, it might help to examine some of the pain points that lead to AI failure to hopefully flatten out the learning curve and shorten its duration. AI’s hidden functions On a fundamental level, says researcher Dan Hendrycks of UC Berkeley , a key problem is that data scientists still lack a clear understanding of how AI works. Speaking to IEEE Spectrum, he notes that much of the decision-making process is still a mystery, so when things don’t work out, it’s difficult to ascertain what went wrong. In general, however, he and other experts note that only a handful of AI limitations are driving many failures. One of these is brittleness — the tendency for AI to function well when a set pattern is observed, but then fail when the pattern is altered. For instance, most models can identify a school bus pretty well, but not when it is flipped on its side after an accident. At the same time, AIs can quickly “forget” older patterns once they have been trained to spot new ones. Things can also go south when AI’s use of raw logic and number-crunching leads it to conclusions that defy common sense. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Another contributing factor to AI failure is that it represents such a massive shift in the way data is used that most organizations have yet to adapt to it on a cultural level. Mark Montgomery, founder and CEO of AI platform developer KYield, Inc. , notes that few organizations have a strong AI champion at the executive level, which allows failure to trickle up from the bottom organically. This, in turn, leads to poor data management at the outset, as well as ill-defined projects that become difficult to operationalize, particularly at scale. Maybe some of the projects that emerge in this fashion will prove successful, but there will be a lot of failure along the way. 
Clear goals To help minimize these issues, enterprises should avoid three key pitfalls, says Bob Friday, vice president and CTO of Juniper’s AI-Driven Enterprise Business. First, don’t go into it with vague ideas about ROI and other key metrics. At the outset of each project, leaders should clearly define both the costs and benefits. Otherwise, you are not developing AI but just playing with a shiny new toy. At the same time, there should be a concerted effort to develop the necessary AI and data management skills to produce successful outcomes. And finally, don’t try to build AI environments in-house. The faster, more reliable way to get up and running is to implement an expertly designed, integrated solution that is both flexible and scalable. But perhaps the most important thing to keep in mind, says Emerj’s head of research, Daniel Faggella , is that AI is not IT. Instead, it represents a new way of working in the digital sphere, with all-new processes and expectations. A key difference is that while IT is deterministic, AI is probabilistic. This means actions taken in an IT environment are largely predictable, while those in AI aren’t. Consequently, AI requires a lot more care and feeding upfront in the data conditioning phase, and then serious follow-through from qualified teams and leaders to ensure that projects do not go off the rails or can be put back on track quickly if they do. The enterprise might also benefit from a reassessment of what failure means and how it affects the overall value of its AI deployments. As Dale Carnegie once said, “Discouragement and failure are two of the surest stepping stones to success.” In other words, the only way to truly fail with AI is to not learn from your mistakes and try, try again. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,014
2,022
"What business executives need to know about AI | VentureBeat"
"https://venturebeat.com/2022/03/07/artificial-intelligence"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What business executives need to know about AI Share on Facebook Share on X Share on LinkedIn Numerai founder and CEO Richard Craib Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Virtually every enterprise decision-maker across the economic spectrum knows by now that artificial intelligence (AI) is the wave of the future. Yes, AI has its challenges and its ultimate contribution to the business model is still largely unknown, but at this point it’s not a matter of whether to deploy AI but how. For most of the C-suite, even those running the IT side of the house, AI is still a mystery. The basic idea is simple enough – software that can ingest data and make changes in response to that data — but the details surrounding its components, implementation, integration and ultimate purpose are a bit more complicated. AI isn’t merely a new generation of technology that can be provisioned and deployed to serve a specific function; it represents a fundamental change in the way we interact with the digital universe. Intelligent oversight of AI So even as the front office is saying “yes” to AI projects left and right, it wouldn’t hurt to gain a more thorough understanding of the technology to ensure it is being employed productively. One of the first things busy executives should do is gain a clear understanding of AI terms and the various development paths currently underway, says Mateusz Lach , AI and digital business consultant at Nexocode. After all, it’s difficult to push AI into the workplace if you don’t understand the difference between AI, ML, DL and traditional software. At the same time, you should have a basic working knowledge of the various learning models being employed (reinforcement, supervised, model-based …), as well as ways AI is used (natural language processing, neural networking, predictive analysis, etc.) VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With this foundation in hand, it becomes easier to see how the technology can be applied to specific operational challenges. And perhaps most importantly, understanding the role of data in the AI model, and how quality data is of prime importance, will go a long way toward making the right decisions as to where, when and how to employ AI. It should also help to understand where the significant challenges lie in AI deployment, and what those challenges are. 
Tech consultant Neil Raden argues that the toughest going lies in the “last mile” of any given project, where AI must finally prove that it can solve problems and enhance value. This requires the development of effective means of measurement and calibration, preferably with the capability to place results in multiple contexts given that success can be defined in different ways by different groups. Fortunately, the more experience you gain with AI the more you will be able to automate these steps, and this should lessen many of the problems associated with the last mile. View from above Creating the actual AI models is best left to the line-of-business workers and data scientists who know what needs to be done and how to do it, but it’s still important for the higher ups to understand some of the key design principles and capabilities that differentiate successful models from failures. Andrew Clark, CTO at AI governance firm Monitaur , says models should be designed around three key principals: Context – the scope, risks, limitations and overall business justification for the model should be clearly defined and well-documented Verifiability – each decision and step in the development process should be verified and interrogated to understand where data comes from, how it was processes and what regulatory factors should come into play Objectivity – ideally, the model should be evaluated and understood by someone not involved in the project, which is made easier if it has been designed around adequate context and verifiability. As well, models should exhibit a number of other important qualities, such as reperformance (aka, consistency), interpretability (the ability to be understood by non-experts), and a high degree of deployment maturity, preferably using standard processes and governance rules. Like any enterprise initiative, the executive view of AI should center on maximizing reward and minimizing risk. A recent article from PwC in the Harvard Business Review highlights some ways this can be done, starting with the creation of a set of ethical principles to act as a “north star” for AI development and utilization. Equally important is establishing clear lines of ownership over each project, as well as building a detailed review and approval process at multiple stages of the AI lifecycle. But executives should guard against letting these safeguards become stagnant, since both the economic conditions and regulatory requirements governing the use of AI will likely be highly dynamic for some time. Above all, enterprise executives should strive for flexibility in their AI strategies. Like any business resource, AI must prove itself worthy of trust, which means it should not be released into the data environment until its performance can be assured – and even then, never in a way that cannot be undone without painful consequences to the business model. Yes, the pressure to push AI into production environments is strong and growing stronger, but wiser heads should know that the price of failure can be quite high, not just for the organization but individual careers as well. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
15,015
2,019
"Google launches TensorFlow 2.0 with tighter Keras integration | VentureBeat"
"https://venturebeat.com/2019/09/30/google-launches-tensorflow-2-0-with-tighter-keras-integration"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches TensorFlow 2.0 with tighter Keras integration Share on Facebook Share on X Share on LinkedIn TensorFlow and Keras Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google open source machine learning library TensorFlow 2.0 is now available for public use, the company announced today. The alpha version of TensorFlow 2.0 was first made available this spring at the TensorFlow Dev Summit alongside TensorFlow Lite 1.0 for mobile and embedded devices , and other ML tools like TensorFlow Federated. TensorFlow 2.0 comes with a number of changes made in an attempt to improve ease of use, such as the elimination of some APIs thought to be redundant and a tight integration and reliance on tf.keras as its central high-level API. Initial integration with the Keras deep learning library began with the release of TensorFlow 1.0 in February 2017. It also promises three times faster training performance when using mixed precision on Nvidia’s Volta and Turing GPUs, and eager execution by default means the latest version of TensorFlow delivers runtime improvements. The TensorFlow framework has been downloaded more than 40 million times since it was released by the Google Brain team in 2015, TensorFlow engineering director Rajat Monga told VentureBeat earlier this year. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The news was announced today ahead of TensorFlow World, an inaugural conference for developers set to take place October 28-31 in Santa Clara, California. In other recent news, Google AI researchers have rolled out a series of natural language understanding breakthroughs, like a multilingual model trained to recognize nine Indian languages. Last week researchers shared news they created ALBERT , a conversational AI model that now sits atop the SQuAD and GLUE performance benchmark leaderboards. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. 
All rights reserved. "
15,016
2,021
"Meta launches PyTorch Live to build AI-powered mobile experiences | VentureBeat"
"https://venturebeat.com/2021/12/01/meta-launches-pytorch-live-a-set-of-tools-for-building-ai-powered-mobile-experiences"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta launches PyTorch Live to build AI-powered mobile experiences Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. During its PyTorch Developer Day conference, Meta (formerly Facebook) announced PyTorch Live, a set of tools designed to make AI-powered experiences for mobile devices easier. PyTorch Live offers a single programming language — JavaScript — to build apps for Android and iOS, as well as a process for preparing custom machine learning models to be used by the broader PyTorch community. “PyTorch’s mission is to accelerate the path from research prototyping to production deployment. With the growing mobile machine learning ecosystem, this has never been more important than before,” a spokesperson told VentureBeat via email. “With the aim of helping reduce the friction for mobile developers to create novel machine learning-based solutions, we introduce PyTorch Live: a tool to build, test, and (in the future) share on-device AI demos built on PyTorch.” PyTorch Live PyTorch, which Meta publicly released in January 2017, is an open source machine learning library based on Torch, a scientific computing framework and script language that is in turn based on the Lua programming language. While TensorFlow has been around slightly longer (since November 2015), PyTorch continues to see a rapid uptake in the data science and developer community. It claimed one of the top spots for fast-growing open source projects last year, according to GitHub’s 2018 Octoverse report , and Meta recently revealed that in 2019 the number of contributors on the platform grew more than 50% year-over-year to nearly 1,200. PyTorch Live builds on PyTorch Mobile , a runtime that allows developers to go from training a model to deploying it while staying within the PyTorch ecosystem, and the React Native library for creating visual user interfaces. PyTorch Mobile powers the on-device inference for PyTorch Live. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! PyTorch Mobile launched in October 2019, following the earlier release of Caffe2go , a mobile CPU- and GPU-optimized version of Meta’s Caffe2 machine learning framework. PyTorch Mobile can launch with its own runtime and was created with the assumption that anything a developer wants to do on a mobile or edge device, the developer might also want to do on a server. 
“For example, if you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” Meta AI software engineer Roman Radle said in a prerecorded video shared with VentureBeat ahead of today’s announcement. Built-in tools PyTorch Live ships with a command-line interface (CLI) and a data processing API. The CLI enables developers to set up a mobile development environment and bootstrap mobile app projects. As for the data processing API, it prepares and integrates custom models to be used with the PyTorch Live API, which can then be built into mobile AI-powered apps for Android and iOS. In the future, Meta plans to enable the community to discover and share PyTorch models and demos through PyTorch Live, as well as provide a more customizable data processing API and support machine learning domains that work with audio and video data. “This is our initial approach of making it easier for [developers] to build mobile apps and showcase machine learning models to the community,” Radle continued. “It’s also an opportunity to take this a step further by building a thriving community [of] researchers and mobile developers [who] share and utilize pilots mobile models and engage in conversations with each other.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,017
2,021
"AI inference acceleration on CPUs | VentureBeat"
"https://venturebeat.com/2021/12/09/ai-inference-acceleration-on-cpus"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored AI inference acceleration on CPUs Share on Facebook Share on X Share on LinkedIn Presented by Intel The vast proliferation and adoption of AI over the past decade has started to drive a shift in AI compute demand from training to inference. There is an increased push to put to use the large number of novel AI models that we have created across diverse environments ranging from the edge to the cloud. AI Inference refers to the process of using a trained neural network model to make a prediction. AI training on the other hand refers to the creation of the said model or machine learning algorithm using a training dataset. Inference and training along with data engineering are the key stages of a typical AI workflow. The workloads associated with the various stages of this workflow are diverse and no single processor, whether a Central Processing Unit (CPU), Graphics Processing Unit (GPU), Field Programmable Gate Arrays (FPGA) or Artificial Intelligence (AI) accelerator, works best for your entire pipeline. Let us delve deeper into AI Inference and its applications, the role of software optimization, and how CPUs and particularly Intel® CPUs with built-in AI acceleration deliver optimal AI Inference performance, while looking at a few interesting use case examples. Not only has my work in AI involved applications in a number of meaningful fields ranging from healthcare to social good, but I have also been able to apply AI to one of my biggest passions — art. I really enjoy combining my hobbies such as painting and embroidery with AI. An example of this is where I was able to use the Neural Style Transfer technique to blend my artwork into the style of famous painters, photos of my friends and pets, or even an Intel microprocessor. We just might have an engaging, hands-on Neural Style Transfer demo for you at the end of the article. Let’s get started! AI Inference as part of the end-to-end AI workflow AI, at its essence, converts raw data into information and actionable insights through three stages — data engineering, AI training, and AI inference/deployment. Intel provides a heterogeneous portfolio of AI-optimized hardware combined with a comprehensive suite of AI tools and framework optimizations to accelerate every stage of the end-to-end AI workflow. Fig 1: Inference as part of the End-to-End AI Workflow With the amount of focus that has traditionally been paid to training in model-centric AI over the years and the more recent focus on data engineering and data-centric AI, inference can seem to be more of an afterthought. 
However, applying what is learnt during the training phase to deliver answers to new problems, whether on the cloud or at the edge, is where the value of AI is derived. Edge inferencing continues to explode across smart surveillance, autonomous machines, and various real-time IOT applications whereas cloud inferencing already has vast usage across fraud detection, personalized recommendations, demand forecasting, and other applications which are not as time-critical and might need greater data processing. Challenges with AI Inference deployment Deploying a trained model for inference can seem trivial. This is however far from true as the trained model is not directly used for inference but rather modified, optimized, and simplified based on where it is being deployed. Optimizations depend on performance and efficiency requirements along with the compute, memory, and latency considerations. The diversity of data and the scale of AI models continues to grow with the proliferation of AI applications across domains and use cases including in vision, speech, recommender systems, and time series applications. Trained models today can be large and complex with hundreds of layers and billions or even trillions of parameters. The inference use case, however, might require that the model still have low latency (ex: automotive applications) or run in a power-constrained environment (ex: battery operated robots). This necessitates the simplification of the trained models even at a slight cost to prediction accuracy. A couple of popular methods for optimizing a trained model, without significant accuracy losses, are pruning and quantization. Pruning refers to eliminating the least significant model weights that have minimal contribution to the final results across a wide array of inputs. Quantization on the other hand involves reducing the numerical precision of the weights for example from 32-bit float to 8-bit integer. Intel AI hardware architectures and AI software tools provide you with everything you need to optimize your AI inference workflow. Accelerating AI Inference: Hardware The different stages of the AI workflow typically have different memory, compute, and latency requirements. Data engineering has the highest memory requirements so that large datasets can fully fit into systems for efficient preprocessing, considerably shortening the time required to sort, filter, label, and transform your data. Training is usually the most computationally intense stage of the workflow and typically requires several hours or more to complete based on the size of the dataset. Inference on the other end has the most stringent latency requirement, often requiring results in milliseconds or less. A point of note here is that while the computing intensity of inference is much lower than that of training, inference is often done on a much larger dataset leading to the use of greater total computing resources for inference vs training. From hardware that excels at training large, unstructured data sets, to low-power silicon for optimized on-device inference, Intel AI supports cloud service providers, enterprises, and research teams with a portfolio of versatile, purpose-built, customizable, and application-specific AI hardware that turns artificial intelligence into reality. The role of CPUS in AI The Intel® Xeon® Scalable processor , with its unparalleled general purpose programmability, is the most widely used server platform from cloud to the edge for AI. 
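To make the quantization idea described above concrete, the toy sketch below maps a matrix of 32-bit float "weights" to 8-bit integers with a single per-tensor scale, cutting memory roughly four-fold at the cost of a small reconstruction error. Real toolchains (and pruning) are considerably more sophisticated; the numbers here are random and illustrative only.

```python
# Toy symmetric int8 post-training quantization of a weight matrix.
import numpy as np

weights_fp32 = np.random.randn(512, 512).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0           # symmetric, per-tensor scale
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
weights_dequant = weights_int8.astype(np.float32) * scale

print("fp32 size (bytes):", weights_fp32.nbytes)
print("int8 size (bytes):", weights_int8.nbytes)
print("mean absolute quantization error:", float(np.abs(weights_fp32 - weights_dequant).mean()))
```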
CPUs are extensively used in the data engineering and inference stages while training uses a more diverse mix of GPUs and AI accelerators in addition to CPUs. GPUs have their place in the AI toolbox, and Intel is developing a GPU family based on our Xe architecture. CPUs, however, remain optimal for most ML inference needs, and we are also leading the industry in driving technology innovation to accelerate inference performance on the industry’s most widely used CPUs. We continue expanding the built-in acceleration capabilities of Intel® DL Boost in Intel® Xeon® scalable processors. Based on Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® DL Boost Vector Neural Network Instructions (VNNI) delivers a significant performance improvement by combining three instructions into one — thereby maximizing the use of compute resources, utilizing the cache better, and avoiding potential bandwidth bottlenecks. Most recently, we announced Intel® AMX (Intel® Advanced Matrix Extensions), an extensible accelerator architecture in the upcoming Sapphire Rapids CPUs, which enables higher machine learning compute performance for both training and inference by providing a matrix math overlay for the AVX-512 vector math units. Accelerating AI Inference: Software Intel complements the AI acceleration capabilities built into our hardware architectures with optimized versions of popular AI frameworks and a rich suite of libraries and tools for end-to-end AI development, including for inference. All major AI frameworks for deep learning (such as TensorFlow, PyTorch, MXNet, and Paddle Paddle) and classical machine learning (such as Scikit-learn and XGBoost) have been optimized by using oneAPI libraries ( oneAPI is a standards-based, unified programming model that delivers a common developer experience across diverse hardware architectures) that provide optimal performance across Intel® CPUs and XPUs. These Intel software optimizations , referred to as Software AI Accelerators , help deliver orders of magnitude performance gains over stock implementations of the same frameworks. As a framework user, you can reap all performance and productivity benefits through drop-in acceleration without the need to learn new APIs or low-level foundational libraries. Along with developing Intel-optimized distributions for leading AI frameworks, Intel also up-streams our optimizations into the main versions of these frameworks, helping deliver maximum performance and productivity to your inference applications when using default versions of these frameworks. Deep neural networks (DNNs) show state-of-the-art accuracy for a wide range of computation tasks but still face challenges during inference deployment due to their high computational complexity. A potential alleviating solution is low precision optimization. With hardware acceleration support, low precision inference can compute more operations per second, reduce the memory access pressure, and better utilize the cache to deliver higher throughput and lower latency. Intel Neural Compressor The Intel® Neural Compressor tool aims to help practitioners easily and quickly deploy low-precision inference solutions on many of the popular deep learning frameworks including TensorFlow, PyTorch, MXNet, and ONNX runtime. Unified APIs are provided for neural network compression technologies such as low precision quantization, sparsity, pruning, and knowledge distillation. 
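The low-precision workflow described above can be tried in miniature with plain PyTorch dynamic quantization, shown below as a stand-in for what tools like Intel Neural Compressor automate with accuracy-driven tuning. The model and input are toy examples, and this is standard PyTorch usage, not the Neural Compressor API itself.

```python
# Dynamic int8 quantization of linear layers in a small PyTorch model.
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(
    nn.Linear(256, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
with torch.no_grad():
    print(model_fp32(x).shape, model_int8(x).shape)  # same interface, int8 weights inside
```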
It implements the unified low-precision inference APIs with mixed precision, easy extensibility, and automatic accuracy-driven tuning while being optimized for performance, model size, and memory footprint. Fig 2: Intel® Neural Compressor Infrastructure Transformers are deep learning models that are increasingly used for Natural Language Processing (NLP). Alibaba’s end-to-end Machine Learning Platform for AI (PAI) uses Intel-optimized PyTorch transformers for processing real world processing tasks for their millions of users. Low latency and high throughput are keys to a Transformer model’s success, and 8-bit low precision is a promising technique to meet such requirements. Intel® DL Boost offers powerful capabilities for 8-bit low precision inference on AI workloads. With the support of Intel® Neural Compressor (previously called the Intel® Low Precision Optimization Tool), we can optimize 8-bit inference performance while significantly reducing accuracy loss. You can read more about the partnership with Alibaba and how Intel’s latest CPUs and the Intel Neural Compressor tool helped bring up to a 3x performance boost on the Alibaba PAI blade inference toolkit here. Intel® Neural Compressor is also an integral part of the Optimum ML Optimization Toolkit from HuggingFace which aims to enable maximum efficiency and production performance to run Transformer models. The Intel® Neural Compressor makes models faster with minimal impact on accuracy, leveraging post-training quantization, quantization-aware training and dynamic quantization. It also helps make them smaller with minimal impact on accuracy, with easy to use configurations to remove model weights. Read more about how one can quantize the BERT model for Intel® Xeon® CPUs here. Intel® Neural Compressor is available as part of the Intel® oneAPI AI Analytics Toolkit , which provides high-performance APIs and Python packages to accelerate end-to-end machine-learning and data-science pipelines, or as a stand-alone component. Intel® Distribution of OpenVINO™ Toolkit The Intel ® Distribution of OpenVINO ™ toolkit enables practitioners to optimize, tune, and run comprehensive AI inference using an included model optimizer and runtime and development tools. It supports many of the popular AI frameworks including Tensorflow, ONNX, PyTorch, and Keras, and allows for deployment of applications across combinations of accelerators and environments including CPUs, GPUs, and VPUs, and from the edge to the cloud. Developers can explore over 350 pre-trained models that are optimized and hosted on the Open Model Zoo repository including popular models such as YOLO and Mobilenet-SSD for object detection which are optimized with the post-training optimization tool and the performance is benchmarked. Also included are several state-of-the-art models for pose estimation, action recognition, text spotting, pedestrian tracking, scene and object segmentation that can be easily downloaded for immediate use. To try it, developers can use OpenVINO Notebooks that install OpenVINO locally for rapid prototyping and validating their work loads. You can get started with just your laptop and get a real-time performance boost from our optimized models in less than 15 minutes! AI Inference Application demo — Neural Style Transfer Hopefully our discussion today has helped you get a better sense of the Inference stage of the AI workflow, its importance and applications, and how it can be accelerated through both AI-optimized hardware architectures and software tools. 
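As a rough sketch of the OpenVINO inference flow described above: read a converted IR model, compile it for CPU, and run a forward pass. The snippet assumes the openvino Python package's 2022-era runtime API, and the model path and input shape are placeholders, so treat the exact entry points as assumptions rather than a definitive recipe.

```python
# Sketch of CPU inference with the OpenVINO runtime (placeholder model path and input shape).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                   # IR file produced by the model optimizer
compiled_model = core.compile_model(model, device_name="CPU")
output_layer = compiled_model.output(0)

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape assumed for an image model
result = compiled_model([dummy_input])[output_layer]
print("Output shape:", result.shape)
```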
Something that has always helped me crystallize concepts is using them in hands-on applications. As mentioned earlier, I love AI and I love to paint. I want to leave you with a quick demo on Neural Style Transfer where I use Intel® CPUs and Intel-optimized TensorFlow to transform my paintings into different styles ranging from Van Gogh's Starry Night to a design of an Intel chip and many more! Fig 3: Neural Style Transfer. Neural Style Transfer is an AI technique that renders your original image in the artistic style of a reference image. Here is a link to all the files, including code and images, that you will need to run your own Neural Style Transfer experiment, along with a short video that walks you through all of the steps. Learn More: Intel AI, Software AI Accelerators, Scaling AI from Pilot to Production. Huma Abidi is Senior Director, AI Software Products and Engineering at Intel. Chandan Damannagari is Director, AI Software, at Intel. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
15,018
2,021
"Nvidia unveils Grace ARM-based CPU for giant-scale AI and HPC apps | VentureBeat"
"https://venturebeat.com/2021/04/12/nvidia-unveils-grace-arm-based-cpu-for-giant-scale-ai-and-hpc-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia unveils Grace ARM-based CPU for giant-scale AI and HPC apps Share on Facebook Share on X Share on LinkedIn Nvidia's Grace CPU for datacenters is named after Grace Hopper. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia unveiled its Grace processor today. It’s an ARM-based central processing unit (CPU) for giant-scale artificial intelligence and high-performance computing applications. It’s Nvidia’s first datacenter CPU, purpose-built for applications that are operating on a giant scale, Nvidia CEO Jensen Huang said in a keynote speech at Nvidia’s GTC 2021 event. Grace delivers 10 times the performance leap for systems training giant AI models, using energy-efficient ARM cores. And Nvidia said the Swiss Supercomputing Center and the U.S. Department of Energy’s Los Alamos National Laboratory will be the first to use Grace, which is named for Grace Hopper , who pioneered computer programming in the 1950s. The CPU is expected to be available in early 2023. “Grace is a breakthrough CPU. It’s purpose-built for accelerated computing applications of giant scale for AI and HPC,” said Paresh Kharya, senior director of product management and marketing at Nvidia, in a press briefing. Huang said, “It’s the world’s first CPU designed for terabyte scale computing.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Grace is the result of 10,000 engineering years of work. The CPU is the result of more than 10,000 engineering years of work. Nvidia said the chip will address the computing requirements for the world’s most advanced applications — including natural language processing, recommender systems, and AI supercomputing — that analyze enormous datasets requiring both ultra-fast compute performance and massive memory. Grace combines energy-efficient ARM CPU cores with an innovative low-power memory subsystem to deliver high performance with great efficiency. The chip will use a future ARM core dubbed Neoverse. “Leading-edge AI and data science are pushing today’s computer architecture beyond its limits — processing unthinkable amounts of data,” Huang said in his speech. “Using licensed ARM IP, Nvidia has designed Grace as a CPU specifically for giant-scale AI and HPC. Coupled with the GPU and DPU, Grace gives us the third foundational technology for computing and the ability to re-architect the datacenter to advance AI. 
Nvidia is now a three-chip company.” Grace is a highly specialized processor targeting workloads such as training next-generation NLP models that have more than 1 trillion parameters. When tightly coupled with Nvidia GPUs, a Grace-based system will deliver 10 times faster performance than today’s Nvidia DGX-based systems, which run on x86 CPUs. In a press briefing, someone asked if Nvidia will compete with x86 chips from Intel and AMD. Kharya said, “We are not competing with x86 … we continue to work very well with x86 CPUs.” Above: The Alps supercomputer will use Grace CPUs from Nvidia. Grace is designed for AI and HPC applications, but Nvidia isn’t disclosing additional information about where Grace will be used today. Nvidia also declined to disclose the number of transistors in the Grace chip. Nvidia is introducing Grace as the volume of data and size of AI models grow exponentially. Today’s largest AI models include billions of parameters and are doubling every two and a half months. Training them requires a new CPU that can be tightly coupled with a GPU to eliminate system bottlenecks. “The biggest announcement of GTC 21 was Grace, a tightly integrated CPU for over a trillion parameter AI models,” said Patrick Moorhead, an analyst at Moor Insights & Strategy. “It’s hard to address those with classic x86 CPUs and GPUs connected over PCIe. Grace is focused on IO and memory bandwidth, shares main memory with the GPU and shouldn’t be confused with general purpose datacenter CPUs from AMD or Intel.” Underlying Grace’s performance is 4th-gen Nvidia NVLink interconnect technology, which provides 900 gigabyte-per-second connections between Grace and Nvidia graphics processing units (GPUs) to enable 30 times higher aggregate bandwidth compared to today’s leading servers. Grace will also utilize an innovative LPDDR5x memory subsystem that will deliver twice the bandwidth and 10 times better energy efficiency compared with DDR4 memory. In addition, the new architecture provides unified cache coherence with a single memory address space, combining system and HBM GPU memory to simplify programmability. “The Grace platform and its Arm CPU is a big new step for Nvidia,” said Kevin Krewell, an analyst at Tirias Research, in an email. “The new design of one custom CPU attached to the GPU with coherent NVlinks is Nvidia’s new design to scale to ultra-large AI models that now take days to run. The key to Grace is that using the custom Arm CPU, it will be possible to scale to large LPDDR5 DRAM arrays far larger than possible with high-bandwidth memory directly attached to the GPUs.” Above: The Los Alamos National Laboratory will use Grace CPUs. Grace will power the world’s fastest supercomputer for the Swiss organization. Dubbed Alps, the machine will feature 20 exaflops of AI processing. (This refers to the amount of computing available for AI applications.) That’s about 7 times more computation than is available with the 2.8-exaflop Nvidia Selene supercomputer, the leading AI supercomputer today. Hewlett Packard Enterprise will be building the Alps system. Alps will work on problems in areas ranging from climate and weather to materials sciences, astrophysics, computational fluid dynamics, life sciences, molecular dynamics, quantum chemistry, and particle physics, as well as domains like economics and social sciences, and will come online in 2023. Alps will do quantum chemistry and physics calculations for the Large Hadron Collider, as well as weather models. Above: Jensen Huang, CEO of Nvidia, at GTC 21. 
“This is a very balanced architecture with Grace and a future Nvidia GPU, which we have not announced yet, to enable breakthrough research on a wide range of fields,” Kharya said. Meanwhile, Nvidia also said that it would make its graphics chips available with Amazon Web Services’ Graviton2 ARM-based CPU for datacenters for cloud computing. With Grace, Nvidia will embark on a multi-year pattern of creating graphics processing units, CPUs, and data processing units (DPUs), and it will alternate between Arm and x86 architecture designs, Huang said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,019
2,021
"Nvidia launches Modulus, a framework for developing 'physics-grounded' AI models | VentureBeat"
"https://venturebeat.com/2021/11/09/nvidia-launches-modulus-a-framework-for-developing-physics-grounded-ai-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia launches Modulus, a framework for developing ‘physics-grounded’ AI models Share on Facebook Share on X Share on LinkedIn A Nvidia logo seen displayed on a smartphone Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia today launched Modulus, a framework for developing “physics-grounded” machine learning models in industries that require a high level of physical accuracy. Modulus trains AI systems to use the laws of physics to model the behavior of systems in a range of fields, according to Nvidia, including climate science and protein engineering. “Digital twin” approaches to simulation have gained currency in many domains. For instance, London-based SenSat helps clients in construction, mining, energy, and other industries create models of locations for projects they’re working on. GE offers technology that allows companies to model digital twins of actual machines and closely track performance. And Microsoft provides Azure Digital Twins and Project Bonsai, which model the relationships and interactions between people, places, and devices in simulated environments. Gartner predicted that 50% of large manufacturers would have had at least one digital twin initiative launched by 2020, and that the number of organizations using digital twins would triple from 2018 to 2022. Markets and Markets estimates that the global market for digital twin technologies will reach $48.2 billion by 2026, up from $3.1 billion in 2020. Above: A physics simulation running with Nvidia Modulus. “Digital twins have emerged as powerful tools for tackling problems ranging from the molecular level like drug discovery up to global challenges like climate change,” Nvidia senior product marketing manager Jay Gould said in a blog post. “Modulus gives scientists a [toolkit] to build highly accurate digital reproductions of complex and dynamic systems that will enable the next generation of breakthroughs across a vast range of industries.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Physics framework Nvidia describes Modulus — which was announced during the company’s fall 2021 GPU Technology Conference (GTC) — as a framework to provide engineers, scientists, and researchers tools to build AI models of digital twins. 
As in most AI-based approaches, Modulus includes a data prep module that helps manage observed or simulated data, accounting for the geometry of the systems it models and the explicit parameters of the space represented by the input geometry. Modulus includes a sampling planner that enables users to select an approach, such as quasi-random sampling or importance sampling, to improve the model’s accuracy. The framework also ships with APIs to take symbolic governing partial differential equations and build physics models, as well as curated layers and network architectures tailored for physics-based problems. In addition, Modulus offers a “physics-machine learning” engine that takes inputs to train models using machine learning frameworks including Facebook’s PyTorch and Google’s TensorFlow. The toolkit’s TensorFlow-based implementation optimizes performance by taking advantage of XLA, a domain-specific compiler for linear algebra that accelerates TensorFlow models. Leveraging the Horovod distributed deep learning training framework for multi-GPU scaling, Modulus can perform inference in near-real-time or interactively once a model is trained. Modulus includes tutorials for getting started with computational fluid dynamics, heat transfer, modeling turbulence, transient wave equations, and other multiphysics problems. It’s available now as a free download through the Nvidia Developer Zone. “The GPU-accelerated toolkit offers rapid turnaround complementing traditional analysis, enabling faster insights. Modulus allows users to explore different configurations and scenarios of a system by assessing the impact of changing its parameters,” Gould wrote. “Modulus is customizable and easy to adopt. It offers APIs for implementing new physics and geometry. It’s designed so those just starting with AI-driven digital twin applications can put it to work fast.” Companies including Alphabet’s DeepMind have investigated applying AI systems to physics simulations. Last April, DeepMind described a model that predicts the movement of glass molecules as they transition between liquid and solid states. Beyond glass, the researchers asserted that the work could lead to advances in industries like manufacturing and medicine, including dissolvable glass structures for drug delivery and renewable polymers. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,020
2,022
"Why synthetic data may be better than the real thing | VentureBeat"
"https://venturebeat.com/2022/04/05/why-synthetic-data-may-be-better-than-the-real-thing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why synthetic data may be better than the real thing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. To deploy successful AI, organizations need data to train models. That said, high-quality data isn’t always easy to access – creating a major hurdle for organizations in launching AI initiatives. This is where synthetic data can be so useful. As opposed to data that is collected from and measured in the real world, synthetic data is generated in the digital world by computer simulations, algorithms, simple rules, statistical modeling, simulation, and other techniques. It is an alternative to real-world data, but it reflects real-world data, mathematically and statistically. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Some experts even contend that synthetic data is better than real-world people, places, and things when it comes to training AI models. Constraints in using sensitive and regulated data are removed or reduced; datasets can be tailored to certain conditions that might otherwise be unobtainable; insights can be gained much more quickly; and training is less cumbersome and much more effective. To that point, Gartner projects synthetic data to completely overshadow real data in AL models by 2030. “The fact is you won’t be able to build high-quality, high-value AI models without synthetic data,” according to the Gartner report. Leaders in synthetic data To support accelerating demand, a growing number of companies are offering synthetic models – top and emerging companies in the space include Mostly AI, AI.Reverie, Sky Engine, and Datagen. Leading data engineering company Innodata has also entered the market, today launching an e-commerce portal where customers can purchase on-demand synthetic datasets and immediately train models. “The kind of datasets we’re going after reflect real-world problems that CIOs and customers have come back to us with,” said CPO Rahul Singhal. “We began looking at: How do we create large amounts of training data that machines need?” The Innodata AI Data Marketplace has been developed by in-house experts specifically for building and training AI/ML models. The data packs are off-the-shelf, easily previewable, unbiased, diverse, thorough, and secure, according to Singhal. Innodata is initially releasing 17 data packs in four languages that home in on financial services. 
These packs are textual, meaning they include invoices, purchase orders, and banking and credit card statements. “One of the big needs in AI is diversity of data,” said Singhal. “We need lots of diverse ways that invoice can be created, we need visibility. It seems very easy, but it’s actually really complicated.” The marketplace complements Innodata’s open-source repository of more than 4,000 datasets. These help in the prototyping of supervised and unsupervised ML projects. The new synthetic datasets take that to the next level based on real-world information. “Machines learn by seeing real-world examples,” Singhal said. For instance, he pointed to the many ways in which a credit card statement could be structured – one could have names listed on the right side; another on the left; one could use a table format; another a column format. To be accurate, machines have to be provided with those variations, in both quality and quantity. Innodata models have been provided with hundreds of templates to allow for such variations and to replicate true scenarios. “Machine learning (ML) depends on a diversity of datasets,” Singhal said. “We create real-world data sets as much as possible and replicate what real-world document types will look like.” Why synthetic data? Among their many advantages, synthetic datasets are free from personal data and therefore not subject to compliance restrictions or other privacy protection laws, Singhal pointed out. This also shields against security breaches. Biases are removed to help automate workflows and enable predictive modeling. Singhal pointed out that “things in the real world are not pristine,” and that people can smudge banking statements or accidentally or purposely obfuscate things. Ultimately, synthetic data will be an important tool in driving the adoption of AI, Singhal said. The eventual intent with Innodata’s marketplace is to expand to third-party AI training data sets, as well as beyond documents to images, video, audio, and speech (the latter in response to the growth in conversational AI). These datasets will also span industries – telecom and utilities, transportation and logistics, energy services, pharmaceuticals, hospitality, insurance, retail, healthcare – and will be provided in an expanding number of languages so that data scientists can build from a global perspective. “Our goal is to create a vibrant marketplace where companies can contribute datasets and monetize data sets,” Singhal said. “This has the potential of democratizing data for AI.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,021
2,021
"How to use the right data for growth marketing in a privacy-centric world | VentureBeat"
"https://venturebeat.com/2021/11/30/how-to-use-the-right-data-for-growth-marketing-in-a-privacy-centric-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to use the right data for growth marketing in a privacy-centric world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Jonathan Martinez Data has always been a critical component of the craft of growth marketing. From being able to pinpoint the time of day when users convert best on Facebook (or as the company is now known, Meta ), to testing the best email subject lines (and the ongoing debate of whether to use emojis), to hyper-analyzing each step of the product funnel to eliminate drop-off. This is all changing because of our shift to a privacy-centric world. Companies like Apple are now enforcing stricter user-level data sharing. States are increasingly looking towards privacy laws like the California Consumer Privacy Act (CCPA), which gives consumers more control over their personal information which businesses can collect. Due to this ever-changing landscape, data that growth marketers have traditionally used to make optimization decisions has become increasingly inaccessible. Solutions are needed now that we have far fewer data to work with. Without the critical data component of growth marketing, startups and corporations are equally impacted. So, how do we adapt to an environment which is far less data-rich than before? Data degradation ​​First, we need to take a step back to a time before recent privacy changes to understand why this is a transitional time for growth marketing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! We’ve largely lived in a world where access to user-level data has been abundant and rich. But being able to analyze conversions to this level of specificity: e.g., a 34-year-old male in San Francisco in the top 10% HHI, is swiftly coming to an end. Governments and companies are slowly but concretely making changes to the types of data that ad platforms can send and receive. In one of the latest examples of this industry shift , Apple forced app developers to display prompts asking users whether they’d like to share their data with 3rd parties in iOS14. According to Statista , the opt-in rate for this specific prompt is 21% as of September 6, 2021, which is unsurprising. This means that 79% of users are unwilling to provide their data to ad platforms in this instance, who then can’t pass the data back to growth marketers. 
It’s a trickle-down effect for growth marketers, who have historically relied on this information to make data-driven decisions. Incrementality testing So, how can growth marketers solve for not having as much data at their disposal? A concept which I’ve always considered vital for growth marketing must now become ubiquitous for every business. The most accurate way to measure and understand marketing efforts is by conducting incrementality tests. The easiest way to explain incrementality testing is that tests show the true lift in conversions from marketing efforts. For example, how many users would have converted last week if a specific marketing campaign were turned off? This type of testing allows us to determine how our efforts are impacting consumer behavior. There are multiple ways to set these tests up, from hacky to precision accuracy. For the sake of keeping this discussion relevant to startups without necessitating data science swarms, the following approach will describe a simplified method that will still get reliable results. Above: Example illustration of test tubes The test tubes in the illustration above provide a visual representation of an incrementality test. Both tubes show the number of conversions (i.e., purchases) that a company acquires when a Facebook campaign is turned on or off. The first test tube shows the Facebook campaign on, and the second test tube shows the Facebook campaign off. We can observe that the first tube had approximately 40% more conversions (or green fill), which would be the “incremental” lift in conversions. These are the conversions that resulted because the Facebook campaign was on. So, how do we go about setting up this incrementality test? Data for growth marketing: an incrementality test example Now that we know the basics and reasoning for incrementality, let’s dive into an example test. In this test, we’ll determine how incremental a Facebook campaign might be. I like to call the setup for this test, “the light switch method”. It involves measuring conversions when the campaign is live, switching the campaign off, and then measuring the conversions again. During testing While the test is running, it’s imperative that no changes are made to the campaign or other channels and initiatives which may be live. If you’re running this test, and then launch something like a free one-month promotion, the conversions would likely spike and invalidate the data. This method relies on keeping everything consistent throughout the testing period, and across your growth marketing efforts. Leveraging results Above: Simple incrementality example analysis. In the example above, the Facebook campaign was live between January 8 to January 14, which resulted in 150 sign-ups. The campaign was then switched off between January 15 to 24, and 50 sign-ups still occurred within this second period. Cost/sign up: $16.66 Cost/incremental sign up: $50.00 These results tell us our Facebook campaign is 200% incremental. It’s a simple example, but this test provides us with our incremental costs, which we can now apply and compare against other channels. Although there is less user-level data to optimize Facebook campaigns, incrementality tests are still a powerful way to understand the effectiveness of the marketing spend. The power of scalar utilization As times change and with politicians continuing to enact legislation forcing company action, I believe we’re moving towards an incremental and scalar-based attribution model in growth marketing. 
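Before getting to scalars, it is worth noting that the incrementality arithmetic above is simple enough to script. A minimal sketch, reusing the sign-up figures from the example (150 with the campaign on, 50 with it off), looks like this:

def incremental_lift(conversions_on, conversions_off):
    # Conversions that would not have happened without the campaign.
    incremental = conversions_on - conversions_off
    # Lift relative to the campaign-off baseline, as a percentage.
    return incremental, 100 * incremental / conversions_off

inc, lift = incremental_lift(150, 50)
print(f"{inc} incremental sign-ups, {lift:.0f}% incremental lift")
# -> 100 incremental sign-ups, 200% incremental lift
# Dividing campaign spend by the incremental count (rather than total sign-ups)
# gives the incremental cost figure used to compare channels.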
Last-click attribution will be a concept of the past, as this relies heavily on user-level data. A scalar, as defined by Encyclopedia Britannica, is “…a physical quantity which is completely described by its magnitude”. In growth marketing, the use of scalars helps increase or decrease a channel or growth medium’s metrics, based on tests or historical data. There is a myriad of uses for scalars, but let’s analyze a timely example. With the introduction of iOS14 and Apple’s SKAD timer effectively cutting all attribution data after a 24-hour window, app-first companies have scrambled for solutions to mitigate this loss of information. This is a perfect use case for implementing a scalar. How does one calculate, analyze and implement a scalar to get necessary data? Take the following example, using Google UAC efforts, which have been impacted because of Apple’s iOS14 SKAD timer. Pre-iOS14, the sign-up> purchase CVR was 32%. With the introduction of iOS14+, the CVR has now dropped to 8%. This is a huge drop, especially considering nothing else has changed in the app flow or channel tactics. Above: Example data for Google UAC campaign pre-scalar. The CACs increased from $125 to $526, which would make this channel seem inefficient, potentially leading to reduced budgets. But instead of outright pausing all iOS Google UAC campaigns, a scalar should be applied to account for data loss. Above: Example data for Google UAC campaign post-scalar. We can divide 80/19, which is the delta in purchases pre-and-post iOS14. We land with a scalar of ~4.21, which we can then multiply our post-iOS14 purchases by, or 19 x 4.21 = 79.99. Our sign-up> purchase CVR is now normalized back at 32%, which is what consumer behavior typically is on our app. There are other ways to implement scalars for growth marketing efforts, including leveraging historical data to inform scalars or using data sources that haven’t been impacted — e.g., Android campaigns. By using this method to make metrics more accurate, it helps prevent a lights-out scenario for seemingly low-performing channels. Looking ahead at the future of data for growth marketing Many see the degradation of data as a doomsday event. But I see it as an opportunity to become more innovative and to move ahead of the competition. Leveraging incrementality tests and scalars are just two of the strategies that focus on data for growth marketing, which can and should be employed to validate growth marketing spend. If anything, this increasingly privacy-centric era will force us to realize the true impact of our data and growth marketing efforts like never before. As we look towards 2022, policies will continue to become enforced by governments and corporations alike. Google has already made it known that 3rd party cookies will become obsolete on Chrome in 2022-2023. They will likely also follow in the footsteps of Apple’s iOS enforcement. But as new policies are enforced, new strategies will be equally needed, and tools from mobile measurement partners or business intelligence platforms should be leveraged. Take this new era in growth marketing to get crafty, because those who do, will eventually end up on top. Jonathan is a former YouTuber, UC Berkeley alum, and growth marketing nerd who’s helped scale Uber, Postmates, Chime, and various startups. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,022
2,022
"Digitization could drive manufacturing beyond Industry 4.0 | VentureBeat"
"https://venturebeat.com/2022/03/12/digitization-could-drive-manufacturing-beyond-industry-4-0"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Digitization could drive manufacturing beyond Industry 4.0 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. By Sef Tuma, global lead for Engineering & Manufacturing at Accenture Industry X The Fourth Industrial Revolution is outpacing Industry 4.0. What looks like a paradox actually isn’t, as the two things aren’t the same. The term “Industry 4.0” typically means digital technologies, like the internet of things, artificial intelligence and big data analytics, applied in factories and plants to make the business more efficient and effective. The Fourth Industrial Revolution goes beyond that. It implies significant shifts driven by these technologies and their usage – new ways of working , communicating and doing business. Just consider how significantly smartphones, social media, video conferencing and ride-sharing platforms have changed our work and private lives. Digital in manufacturing is still a mixed bag Has manufacturing witnessed this kind of fundamental change over the past decade? Many companies are definitely experimenting with the disruptive potential of Industry 4.0 technologies. Take industrial equipment maker Biesse , which now sells production machines that send data to a digital platform, predicting machine failure and deploying maintenance crews. Or Volkswagen , which used AI-powered generative design to reconceptualize its iconic 1962 Microbus to be lighter and greener, ultimately creating parts that were lighter and stronger and reducing the time spent getting from development to manufacturing from a 1.5-year cycle to a few months. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The other side of the coin: There’s a lot of digital white space in manufacturing. Compared to other parts of the enterprise, like marketing, sales and administrative processes, manufacturing is far from being as digital as it could be. A survey revealed that, in 2020, only 38% of companies had deployed at least one project to digitize their production processes. According to another study from the same year, most companies were still somewhere between piloting digital capabilities in one factory or plant and deploying these pilots to other sites. This hardly paints the picture of a revolution. However, change is underway. 
Three developments are driving manufacturers toward a tipping point Companies across the globe see and act upon the need for compressed transformation to remain relevant while becoming more resilient and responsible. This includes the transformation of a core piece of their business – manufacturing. Three burning platforms are driving them toward the next digital frontier: 1. The ongoing pandemic is accelerating change. The pandemic has accelerated the adoption and implementation of digital technologies in manufacturing, as it shed an unflattering light on the digitization gaps. Many companies had to shut down production because they couldn’t run their factories remotely or couldn’t adjust their production lines to supply and demand that changed overnight. To maintain social distancing in the workplace, companies introduced intelligent digital worker solutions to ensure their workers could maintain production lines, whilst rallying around the critical purpose of protecting employees. During this shift, 48% of organizations invested in cloud-enabled tools and technologies and 47% in digital collaboration tools to support their remote workforce, according to an Accenture survey. The pandemic also created a need for more agile manufacturing than ever before. Many companies united on the shared purpose of aiding the front line. Pivoting factory production from alcohol to hand sanitizer or fashion to PPE is no simple task. Still, these businesses transformed almost overnight with the right data, connectivity and intelligent machines. 2. Software redefines physical products. Whether it’s cars, medical devices or even elevators – physical products that used to be relatively dumb are becoming smarter. Some are even becoming intelligent. What now defines many tools, devices and machines aren’t nuts and bolts but bits and bytes. Software enables and controls their functionality and features. Already in 2018, 98% of manufacturers had started integrating AI into their products. In 2020, 49% of companies reported that more than half of their products and services require subsequent software updates. And by 2025, there could be more than 27 billion connected devices generating, sending and computing information all over the planet. Consequently, making a successful product has become a primarily digital job, but that doesn’t mean the mechanical and physical requirements have become obsolete. In many areas, the look and feel of things are likely to remain the decisive factor for customers and consumers. And while a few people may see advantages in eating with intelligent forks and wearing smart socks, in all likelihood, those will remain a minority. A significant and growing number of ‘things’ in manufacturing, however, are already being designed and engineered from their digital features. It means a massive change in the engineering process and skills required. It also means: Manufacturers need to become software-savvy. Relying on their traditional competitive advantages isn’t enough. They need to keep and strengthen those and add software expertise to the mix. 3. The sustainability imperative depends on digital. Stakeholders are increasingly demanding that companies make more sustainable things, in a more sustainable manner. Investors’ appetite for so-called impact investing—seeking to generate a positive impact for society along with strong financial returns—is growing and could total as much as US$26 trillion. 
Regulators are demanding greater sustainability commitments as well, for example, the European Commission whose Sustainable Products Initiative will ban the destruction of unsold durable goods and restrict single-use products. And consumers are willing to pay for sustainable products, with products marked as “sustainable” growing 5.6x faster than conventionally marketed products. This pressure to become more sustainable will be a crucial digitization driver in manufacturing. For example, 71% of CEOs say that real-time track-and-trace of materials or goods will significantly impact sustainability in their industry over the next five years, according to the United Nations Global Compact 2021 study. Digital twins will also play a pivotal role supporting sustainability efforts. These data-driven simulations of real-world things and processes can reduce the equivalent of 7.5Gt of carbon dioxide emissions by 2030, research shows. Johnson Controls , a global leader in smart and sustainable building technologies, has partnered with Dubai Electricity and Water Authority and Microsoft on the implementation of Al Shera’a, the smartest net zero-energy government building in the world. Through digital twins, AI and smart building management solutions, the building’s total annual energy use is expected to be equal to or less than the energy produced on-site. Two crucial steps will help manufacturers achieve their next digital frontier All three developments are landmarks of the next digital frontier ahead for most manufacturers. They pose significant challenges to how customer and employee-relevant manufacturers will remain, how resilient they will be and how responsibly they can act. They should address these challenges by focusing their efforts on two things: 1. Don’t stop at implementing technology – connect it intelligently. As described at the outset, Industry 4.0 and the fourth industrial revolution aren’t the same. To foster meaningful change, companies need to connect Industry 4.0 technologies in a way that allows them to see much clearer and farther ahead – allowing them to act and react much quicker according to what they see. For example, cloud platforms to share and process data; machine learning algorithms to analyze this data and build various scenarios and digital twins to experiment with these data-driven scenarios. If connected intelligently to act in concert, the technologies form a digital thread, enabling information to flow between people, products, processes and plants, running all the way from a company’s research and product development to factory floors, supply chains, consumers and back again. This thread makes the product development, production process, market demands and customer behavior more visible and transparent. One can picture it as a virtuous loop of digital copies of every aspect of the product development, engineering and production process – allowing companies to predict, monitor and address the consequences of almost every action. 2. Don’t expect change to happen. Manage it wisely. The people agenda is as important as the technology agenda, perhaps even more so. Digital means new ways of working, just like the steam engine and conveyor belt did. As more and newer technologies enter the workplace, traditional roles will move from executing manual tasks to monitoring, interpreting and guiding intelligent machines and data. This means jobs will require more innovation, creativity, collaboration and leadership. 
Companies that don’t recognize this and act on it are in for a disappointment. For example, in a 2020 survey , 63% of companies admitted that they had failed to capture the expected value from their cloud investments. The major roadblocks of their cloud journey proved to be the people and change dimensions. Similarly, only 38% of supply chain executives felt that their workforce was mostly ready or completely ready to leverage the technology tools provided to them. Manufacturing is lagging when it comes to digitization — as a sector and within the enterprise. But more and more companies have come to realize that manufacturing is their next digital frontier and are focusing their efforts on this core part of the enterprise. The technologies are available and have proven their worth and both the need and the benefits of digital manufacturing are obvious. Companies that connect technology intelligently and manage the change it brings wisely can go well beyond the efficiency and effectiveness scenarios that Industry 4.0 provides. Sef Tuma is global lead for Engineering & Manufacturing at Accenture Industry X. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,023
2,022
"How the metaverse could remake manufacturing | VentureBeat"
"https://venturebeat.com/2022/04/26/how-the-metaverse-could-remake-manufacturing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How the metaverse could remake manufacturing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Unbridled hype and early flirtations with the metaverse have prompted industry, public thinkers, and everyday web users to speculate about its potential to reshape the internet, business, and our social lives. Perhaps the most complete and influential mapping of what the metaverse will be was authored by strategist, advisor and essayist, Matthew Ball, two years ago. He wrote that the metaverse will be persistent, synchronous, interoperable, connective of the digital and physical worlds, and defined by content from a wide range of contributors. Predicting the evolution and impact of a change so broad and complex is assuredly difficult, but it can be instructive to examine the potential influence of the metaverse through narrower sectors. The manufacturing sector, for example, could see new opportunities for innovation by leveraging the metaverse’s capabilities. Let’s take a look at how that could happen. Accelerating the innovation cycle One of the most straightforward ways the metaverse is likely to change manufacturing is by significantly accelerating the process of prototyping. Unlike real world prototypes that may require significant customization of manufacturing, prototypes in the metaverse could be built quickly by taking advantage of the rendering capability of virtual engines like Unity and functionality from rich simulations built off the logic of tools such as digital twins. This change in prototyping could lead to innovations in process and could result in new types of products, ultimately leading to more options for customers and shorter timelines from conception to creation. For example, an automaker could use digital twins of its real-world assembly along with digital prototyping to explore ways to optimize the manufacturing process while tweaking the designs of its cars. Or it could use digital prototyping to customize processes so that a wider menu of product variations can be produced. Automakers are already moving in this direction. To optimize manufacturing processes and operations, Hyundai and BMW have created virtual twins of production plants. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! 
Digital twins could also be used to better understand the customizations that would be required to modify a vehicle for different use cases; for example, an automaker could learn from digital twins in the construction industry to develop and produce improved vehicles for that industry. A fertile ground for product testing In addition to enabling rapid prototyping, the metaverse should prove a robust environment for testing consumer preferences. By enabling customers to interact with digital prototypes, companies can gain valuable insights into what customers want, influencing product development and increasing opportunities for collaboration and customization. The result could be that brands offer more diverse lines of products. High-fidelity virtual test products in the metaverse can unlock rich customer insight and allow brands to scale user testing along various models. The decentralized nature of the metaverse could enable democratic and diverse market research. Conversely, brands could take the opposite approach and reward high-value customers with exclusive opportunities to give input into product design. Brands could even monetize the innovation process by selling memberships that offer advanced access to prototypes and opportunities to influence development. In the home, consider a new smart fridge maker that wants to gain insight into the interface of its forthcoming product. By restricting permissions to a select group of testers, the brand could roll out new products virtually to users identified as likely early adopters. Down the line, such exercises could include testing appliances delivering 3-D printed food and additional robotic functionality in the kitchen. Decentralized, democratized, and transparent production The metaverse could also lead to democratization, decentralization, and increased transparency within the manufacturing sector across industries. Digital-only or digital-first products will require their own manufacturing teams and processes. These teams can be decentralized and global, comprised of extremely diverse types of collaborators and even paired with after-market innovators, who will also be empowered by the metaverse to play an increasing role in product modification. Both product development workers and consumers should experience increased project visibility from the metaverse. With a more complete view of the project cycle, customers could track the production of their products from the sourcing of raw materials to delivery. Such transparency could lead to increased demand for ethical business practices throughout the product cycle. Connecting the digital and physical worlds Finally, the metaverse will connect the digital and physical worlds in ways that will prompt new fields of innovation, business models, and demands for manufacturing. For example, in the world of fashion, trends could emerge digitally — on avatars — and then translate into the real world, requiring physical production. Brands looking to capitalize on such a trend would need reactive and nimble manufacturing teams and processes to take advantage of such opportunities. In an example of merging the digital and physical, Nike recently acquired RTFKT, which makes way for digital and real-world connections. For example, purchasers of sneaker NFTs could get exclusive access to matching physical pairs. 
Looking forward Accelerated innovation cycles, new and immersive product testing, decentralization of production, and interconnected digital and physical worlds are likely to define large shifts in manufacturing as the metaverse continues to take shape. It is extremely difficult to predict the specific trends that will take hold, precisely because consumers are likely to play a significant role in the design, development, and modification of products. If executed to its potential, the metaverse will democratize not only production but product offerings themselves, giving consumers the products and features they ask for. However, we must remember that the metaverse is resource-intensive. While there is much to be gained in the metaverse, its potential environmental impact is significant and alarming — another very profound way in which the digital and physical worlds are inextricable. Those looking to invest in the metaverse and harness its immense promise must not do so at the expense of solving the climate crisis and other physical-world challenges. In manufacturing or any other sector, the rise of the metaverse must ultimately fit within a framework of sustainability. Katrin Zimmermann is Managing Director at TLGG, which advises global companies in the auto, retail and healthcare spaces on digital strategy, business model innovation, and organizational transformation. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,024
2,022
"Bringing digital twins to boost pharmaceutical manufacturing | VentureBeat"
"https://venturebeat.com/2022/05/26/bringing-digital-twins-to-boost-pharmaceutical-manufacturing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bringing digital twins to boost pharmaceutical manufacturing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Digital twins — a digital replica of the factory floor — are an important part of the rapid digital transformation of traditional manufacturing known as Industry 4.0 So what about Pharma 4.0 ? Pharmaceutical manufacturers are increasingly interested in the tenets of Industry 4.0, including the use of digital twins to simulate, test and optimize manufacturing processes on a computer before using them in production, according to technology advisory firm ABI Research. It projects spending by pharmaceutical manufacturers on data analytics tools—including the digital twin — to grow by 27% over the next seven years, to reach $1.2 billion in 2030. As with other manufacturers, pharmaceutical makers plan to use the digital tools to boost productivity and to track their operations. SaaS AI Toronto-based Basetwo recently moved into this market with its software-as-a-service (SaaS) artificial intelligence (AI) platform. Today, the year-old company announced an upcoming $3.8 million seed financing round led by Glasswing Ventures and Argon Ventures. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “When you have a digital twin, you can use it to run all sorts of scenarios. Like, if I change this lever or knob, how does that impact yield efficiencies?” said Thouheed Abdul Gaffoor, the company’s cofounder and chief executive officer. “You can uncover the best ways to operate the bioreactors and chromatography columns as opposed to doing it with actual prototypes.” McKinsey notes that the type of analytics digital twins provide, in conjunction with other Industry 4.0 technologies like robotics and automation, typically boost productivity for pharmaceutical manufacturers by between 50 and 100%. Average-performing facilities could see improvements of 150 to 200%, according to the management-consulting firm. The Basetwo no-code software platform uses AI to find and learn correlations between various data points returned by the digitally simulated scenario. In the pharmaceutical industry, bioreactors are used in the industry to produce compounds and substances with the help of cells or whole organisms. These compounds are then used as finished products or undergo additional processing steps to get an isolated compound, such as vaccines or proteins. 
Chromatography columns separate chemical compounds. Both are vital—and expensive—pieces of equipment that can afford no downtime and that must operate efficiently for maximum cost savings. But downtime can happen when, for example, bioreactors can become contaminated through improper maintenance, failing equipment or if feeding ports aren’t sterilized long enough or at high enough temperatures. Sometimes gaskets or O-rings are missing or improperly sealed. For this reason, digital twins are also used to track maintenance and operation issues and to identify parts that are close to failing, so they can be replaced soon. For that, the digital twin depicts actual operating conditions, thanks to information continuously received from the many sensors on the equipment via the Internet of Things (IoT), Use cases spanning industries Gaffoor offered another example – a pharmaceutical company that makes biological therapies for rheumatoid arthritis or cancer treatment. The protein for such a therapy would be produced in a bioreactor. “An engineer can use our platform to pull data from the bioreactor and build a simulation model of the bioreactor and a digital twin,” he said. “Then, the engineer can do experiments like, ‘what if I change the temperature or cell culture pH?’ How will that increase the yield of that protein?” Gaffoor said. The platform can also simulate how equipment like a bioreactor will work with downstream tools such as a filtration system, he added. Though the Basetwo tool was developed for pharmaceutical companies, it can be used in adjacent industries such as chemicals, food and beverage and consumer goods, Gaffoor said. Larger companies, like Atos-Siemens, make AI-powered digital twin platforms for the pharmaceutical industry. Dassault Systèmes also makes the software. In both cases, the platforms are not concentrated specifically on the pharmaceutical industry, but can be tailored to that application. One reason for the movement of digital twins into the pharmaceutical’s space is that the number of FDA new drug approvals has steadily increased in the past two decades. Between 2000 and 2008 the agency approved 209 new drugs; that number jumped to 302 between 2009 and 2017. “There’s been so much investment in research and development, but investment [in analysis software] hasn’t caught up,” Gaffoor said. He hopes to help address this gap with Basetwo’s cost-saving analysis software. The AI platform can also aid with the labor shortages that have affected the entire manufacturing industry over the past two years. Before AI platforms such as Basetwo were available, highly skilled process engineers and operators would manually track how the equipment was functioning. They would use older software or even paper-based techniques to determine how best to improve efficiency and production. The no-code Basetwo platform, on the other hand, requires few programming skills to quickly build and deploy digital models. Basetwo will use the seed financing to hire data scientists and platform engineers to accelerate platform development, Gaffoor said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
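To make the "change a lever or knob" idea concrete, here is a minimal sketch of the general pattern behind a data-driven process twin: fit a surrogate model on historical batch records, then sweep candidate set points on the model instead of on the physical reactor. The data, column choices and model below are invented for illustration and are not Basetwo's implementation.

```python
# Illustrative only: a data-driven surrogate ("digital twin") of a bioreactor,
# fit on synthetic batch records. The yield relationship is hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic historical batches: temperature (C), pH, and measured protein titer (g/L)
temp = rng.uniform(30, 40, 200)
ph = rng.uniform(6.5, 7.8, 200)
titer = 2.5 - 0.05 * (temp - 36) ** 2 - 1.2 * (ph - 7.1) ** 2 + rng.normal(0, 0.1, 200)

X = np.column_stack([temp, ph])
twin = GradientBoostingRegressor().fit(X, titer)

# "What if" sweep: evaluate candidate set points on the surrogate, not the real reactor
grid_t, grid_ph = np.meshgrid(np.linspace(30, 40, 41), np.linspace(6.5, 7.8, 27))
candidates = np.column_stack([grid_t.ravel(), grid_ph.ravel()])
pred = twin.predict(candidates)

best = candidates[pred.argmax()]
print(f"Predicted best set point: {best[0]:.1f} C, pH {best[1]:.2f} "
      f"(predicted titer {pred.max():.2f} g/L)")
```

In practice such a surrogate would be trained on real historian and sensor data and validated against held-out batches before any recommended set point is trusted on the plant floor.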
"
15,025
2,021
"Enterprise NLP budgets are up 10% in 2021 | VentureBeat"
"https://venturebeat.com/2021/09/21/enterprise-nlp-budgets-are-up-10-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise NLP budgets are up 10% in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprises are increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language data. According to a new survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. The goal of NLP is to develop models capable of “understanding” the contents of documents to extract information as well as categorize the documents themselves. Over the past decades, NLP has become a key tool in industries like health care and financial services, where it’s used to process patents, derive insights from scientific papers, recommend news articles, and more. John Snow Labs’ and Gradient Flow’s 2021 NLP Industry Survey asked 655 technologists, about a quarter of which hold roles in technical leadership, about trends in NLP at their employers. The top four industries represented by respondents included health care (17%), technology (16%), education (15%), and financial services (7%). Fifty-four percent singled out named entity recognition (NER) as the primary use cases for NLP, while 46% cited document classification as their top use case. By contrast, in health care, entity linking and knowledge graphs (41%) were among the top use cases, followed by deidentification (39%). NER, given a block of text, determines which items in the text map to proper names (like people or places) and what the type of each such name might be (person, location, organization). Entity linking selects the entity that’s referred to in context, like a celebrity or company, while knowledge graphs comprise a collection of interlinked descriptions of entities (usually objects or concepts). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The big winners in the NLP boom are cloud service providers, which the majority of companies retain rather than develop their own in-house solutions. According to the survey, 83% of respondents said that they use cloud NLP APIs from Google Cloud, Amazon Web Services, Microsoft Azure, and IBM in addition to open source libraries. 
This represents a sizeable chunk of change, considering the fact that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue alone from its AI software platforms. NLP challenges Among the tech leaders John Snow Labs and Gradient Flow surveyed, accuracy (40%) was the most important requirement when evaluating an NLP solution, followed by production readiness (24%) and scalability (16%). But the respondents cited costs, maintenance, and data sharing as outstanding challenges. As the report’s authors point out, experienced users of NLP tools and libraries understand that they often need to tune and customize models for their specific domains and applications. “General-purpose models tend to be trained on open datasets like Wikipedia or news sources or datasets used for benchmarking specific NLP tasks. For example, an NER model trained on news and media sources is likely to perform poorly when used in specific areas of healthcare or financial services,” the report reads. But this process can become expensive. In an Anadot survey , 77% of companies with more than $2 million in cloud costs — which include API-based AI services like NLP — said they were surprised by how much they spent. As corporate investments in AI grows to $97.9 billion in 2023, according to IDC, Gartner anticipates that spending on cloud services will increase 18% this year to a total of $304.9 billion. Looking ahead, John Snow Labs and Gradient Flow expect growth in question-answering and natural language generation NLP workloads powered by large language models like OpenAI’s GPT-3 and AI21’s Jurassic-1. It’s already happening to some degree. OpenAI says that its API, through which developers can access GPT-3, is currently used in more than 300 apps by tens of thousands of developers and producing 4.5 billion words per day. The full results of the survey are scheduled to be presented at the upcoming NLP Summit , sponsored by John Snow Labs. “As we move into the next phase of NLP growth, it’s encouraging to see investments and use cases expanding, with mature organizations leading the way,” Dr. Ben Lorica, survey coauthor and external program chair at the NLP summit, said in a statement. “Coming off of the political and pandemic-driven uncertainty of last year, it’s exciting to see such progress and potential in the field that is still very much in its infancy.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,026
2,022
"Top 3 cloud-based drivers of digital transformation in 2022 | VentureBeat"
"https://venturebeat.com/2022/01/24/the-top-3-cloud-based-drivers-of-digital-transformation-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Top 3 cloud-based drivers of digital transformation in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Will Grannis, CTO, Google Cloud The year ahead will mark the greatest reset for corporate brands, work, and innovation in living memory. Why? Digital technology has been and continues to be the driving force behind some of the most dramatic ways companies are responding to the COVID pandemic : everything from shopping and supply chains to child care and work changed to increase safety and fight disease transmission. These new ways of working and delivering products and services will undoubtedly have profound lasting effects. For example, Target was able to quickly introduce new tech-enabled safety features and scale to support explosive growth in same-day services like drive up and pick up because of the digital architecture it has created. Target’s cloud infrastructure enables engineers to rapidly write and deploy code, and helps store team members easily access and manage information to deliver exceptional omnichannel guest experiences. In addition, governments and health professionals around the world transformed the work of hospitals, courts and schools into virtual digital experiences, empowering citizens to care for themselves and loved ones. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! All of this is possible because the cloud is fundamentally a platform on which digital information can be gathered, modeled, acted on, and made useful to people everywhere through software applications. As COVID has shown, this is happening at historic rates with sweeping positive effects. This has changed what people expect, too. At lightning speed, consumers have grown used to a daily barrage of large amounts of digital information, from sources as diverse as online shopping sites, smart doorbell cameras, personal fitness monitors or contact tracing apps. People expect the freedom and choice of remote work, thanks to cloud-based video, collaboration and data analysis tools. They work and socialize with networks of people distributed across large areas, sometimes in different countries. We don’t know what this year will look like in terms of the pandemic, but the platform-based impact of new modeling, understanding and acting is now part of our future. 
As the CTO of Google Cloud, I speak with top executives from a broad range of industries and, almost daily, they’ve reinforced the need of running their business on a secure, well-engineered digital platform to enable future innovation. These conversations have also shaped my view on what we can anticipate in 2022 and beyond: Consumers want visibility everywhere Consumers today operate in richer information environments than ever before, and will expect information and transparency such as a company’s sustainability practices, or how their privacy is protected. The expectation of two-way communication and the demand that the customer be heard was building before COVID. It will strengthen with time. Unilever already sees this, and besides aiming to have 1 billion personalized relationships in 190 countries, is working to end deforestation in its supply chain. Bring your values to work How people feel doesn’t change between being a consumer to being at work. They want options around hybrid work, equity and wellness. Much of the language today around the “Great Resignation,” or people leaving their jobs, is less about money and time at the office, and more about finding work with meaning that ultimately contributes to a better world. Beyond wanting to be heard at the workplace, people are curious to know how their work makes a positive contribution. For example, companies want more visibility into the carbon footprint of their technology platforms and options to reduce it, offering positive contributions that are appreciated by both staff and customers. This is in part in response to people bringing their values to work and companies responding to those values. We’ll see this increase moving forward. According to Deloitte , Gen Z is the first generation to make choices about the work they do based on personal ethics. And McKinsey says two-thirds of millennials take an employer’s social and environmental commitments into account when deciding where to work. The Golden Age of measurement Measurement may be the most powerful aspect of the cloud. Companies are becoming more and more engineering-based and data-driven. More data from new sources, better AI and analytics models, stronger computation, delivered on a fast and secure global network – it all combines to enable new models and insights to understand the world, serve customers, build a richer and more effective workforce, or improve operations. What I hear most from companies of every size is the opportunity created by seeing and measuring more parts of their world – better understanding of everything from quality control in real time, to the actual states of their supply chains, to the collective sentiment of employees around corporate policies, like remote work. Real time data now powers decision making and it will only continue to influence corporate decisions. Cloud-based platforms are forever changing basic research, product design, customer service and the structure of work. This year, we’ll see even richer examples of this. It is still early days when it comes to cloud adoption , with only 10-15% of enterprise IT spending and 20-30% of workflows moved to the cloud. I am convinced it is the strongest and most powerful technology trend of my lifetime. Based on both the rise of platform-based innovation and what I experience daily with customers, the advances in industries from retail to manufacturing to healthcare will astonish us all. 
Will Grannis is the founder and leader of Google’s CTO Office. "
15,027
2,022
"Findability hopes to make AI adoption easier for enterprises | VentureBeat"
"https://venturebeat.com/2022/02/02/findability-hopes-to-make-ai-adoption-easier-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Findability hopes to make AI adoption easier for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. These days, “big data ” simply isn’t enough. To provide meaningful insights and valuable analytics for optimal decision making, companies must adopt the concept of “wide data.” Whereas big data focuses on the so-called “three V’s” — volume, velocity, and value — wide data homes in on value, according to Anand Mahurkar, founder and CEO of leading enterprise AI company Findability Sciences. That is, it’s not just a mass of data for data’s sake, or data derived from a few sources. It’s tying together data from a wide range of sometimes seemingly disparate sources to allow for deeper, more purposeful analysis. “Wide data means not only the data in my organization; it’s going beyond the boundaries of my organization and combining external data, internal data, structured and unstructured data,” Mahurkar explained to VentureBeat. “If you want to know what will happen and what to do, you will need ‘wide data’ and not just ‘big data.'” The evolution of enterprise AI This is a foundational concept in enterprise AI , which involves embedding artificial intelligence methodologies into an organization’s data strategy. The software category is undergoing rapid growth as more companies across all sectors undergo digital transformation. Global Industry Analysts Inc. projects the global market for enterprise AI to reach $15.9 billion by 2026. That’s up from $1.8 billion in 2020. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Findability Sciences is working to set itself apart in a market whose dominant players include the likes of C3 AI, Abacus.ai, Microsoft , and Snowflake. Specifically, the Boston-headquartered company is lasering its focus on what Mahurkar called traditional companies — such as those in the manufacturing and retail spheres — that are still making use of legacy software products. This remains a sizable market: there are more than 60,000 companies worldwide with revenues of $200 million or more. Particularly post-pandemic, these enterprises are beginning to understand the necessity of digital transformation and AI, but they struggle with adoption and deployment, Mahurkar said. Undertaking a custom build to embed AI into existing infrastructure can be a paralyzing proposition, and outsourcing can be costly while taking an undue amount of time. 
Embedding AI technology To help companies tackle — and ideally master — the transition, Findability today launched its new white-label suite Findability.Inside. Quickly deployable and repeatable, it allows companies to embed AI technology into their already existing hardware and software, in turn enhancing features and functionalities and driving new insights and efficiencies. The suite makes use of advanced capabilities including computer vision, machine learning, and natural language processing to aid with predictions and forecasting, price optimization, market targeting and segmentation, sales prospecting, online advertising, and customer service. One unique feature is NLP-driven automatic summarization of video meeting recordings and industrial scale document scanning. With intelligent document processing, Mahurkar explained, embedded AI can analyze keywords and context in text-heavy documents and create automatic summaries that save valuable human reading time. It’s no secret that many external factors impact any given business, he noted. But in the past, such factors have been difficult to predict, or prepare for — which is where enterprise AI can prove so valuable. For example, say you’re a manufacturer of air conditioner units that procures your coils from China. Economists have hinted at market disruptions and fluctuations that could impact supply chain, cost, and time-to-market; these imperative details can help you pivot to make up for any gaps. Similarly, being apprised of predicted weather patterns in your top sales areas can prompt you to market more heavily there or to branch out into other areas. Mahurkar provided another example of a Silicon Valley-based digital signal processor company that used the Findability platform to track propensity-to-click patterns. In studying those, it made adjustments to online advertising and real-time bid-optimization to offer more competitive rates when it comes to cost per impression and cost per mille. For businesses dealing in physical products, Mahurkar added, enterprise AI can help by tracking and predicting inventory, supply chain issues, market conditions, and price fluctuations. “Most software will tell you what happened, they’ll tell you about the past,” such as revenues and profits, he said. “But customers are looking for the ability to know what will happen and what to do. They want leading indicators.” He described Findability.Inside as a low-code, low-cost, easy-to-use suite that can be rapidly integrated. Companies see end results in enhanced legacy products, improved customer service, bolstered revenues, and customer satisfaction and retention. They can drive digital transformation without having to develop code or invest significant human talent that is critical elsewhere. Founding Findability A first-generation immigrant and entrepreneur who came to the U.S. 20 years ago, Mahurkar established Findability in 2018 with an initial investment from SoftBank Group. In just a short time, the company has garnered high-profile customers such as IBM, Snowflake, and Red Hat, and its products have been used in conversational computing, and to analyze advertisement efficacy, assess propensity to pay, forecast apartment rentals and occupancy, and optimize supply chains. As Mahurkar explained, he set out to build a technology that connects internal, external, structured, and unstructured data to “improve a company’s ability to find information,” and allow them to realize the potential of data and apply that to business improvement. 
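As a rough illustration of the automatic summarization capability described above (not Findability.Inside itself), the sketch below runs an off-the-shelf open-source summarization model over a short meeting note; the model name and text are placeholders.

```python
# Generic document summarization with an off-the-shelf open-source model
# (Hugging Face Transformers); purely an illustration of the technique.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "The quarterly review meeting covered supply chain delays for coil "
    "shipments, revised pricing for the summer season, and a proposal to "
    "expand online advertising in the two top-performing sales regions."
)
summary = summarizer(document, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```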
When rolling out a suite like Findability’s, he added, “Now within just a few months their products can be enabled with AI, and it’ll start telling their end customers that ‘This will happen to your business. And this is what you should do.'” "
15,028
2,022
"10 essential ingredients for digital twins in healthcare | VentureBeat"
"https://venturebeat.com/2022/02/21/10-essential-ingredients-for-digital-twins-in-healthcare"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 essential ingredients for digital twins in healthcare Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This story looks at some fundamental building blocks that work together to build a digital twin infrastructure for medicine. It explains how promising techniques like APIs, graph databases, ontologies, electronic health records are being combined to unlock digital transformation in healthcare. Digital twins could transform healthcare with a more integrated approach for capturing data, providing more timely feedback, and enabling more effective interventions. The information required to allow for better simulations lies scattered across medical records, wearables, mobile apps, and pervasive sensors. Medical digital twins can use raw digital ingredients like natural language processing (NLP), APIs, and graph databases to understand all the data and cut through the noise to summarize what is going on. Equally important, these raw ingredients can be reconstituted to craft digital twins of healthcare organizations or drug and medical devices to improve medical outcomes and reduce costs. Other industries are likely to benefit by adapting similar ingredients to similar workflows in construction, product development, and supply chain management. A living data system One of the key promises of medical digital twins is not just to fix us when we have broken down but reduce the rate at which we break down. Dan Fero, managing director of OMX Ventures, a new firm investing in digital medicine, told VentureBeat, “A digital twin should represent a living data system that can take in longitudinal biodata over time and track and learn from that evolving data set to give a reflection of a person’s health and more importantly — health trajectory. ” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This starts with measuring and tracking biodata such as cholesterol levels, vitamin panels, and medical imaging results. It will also need to include more complex datapoints, such as genomic, epigenetic, metabolomic, and immune function data. “At present, we have ‘some’ idea of the importance of these datasets in isolation, but we aren’t truly capable of linking these datasets and using that linkage to understand likely changes to future health outcomes,” Fero said. He believes the next phase lies in encoding the data to create digital twins at scale, pursued by only a handful of companies like Q Bio. 
“I think this is a super fascinating space and will be an evolving area for decades to come as we continue to understand how to take in new biologic data points, sift through them to understand what is important and prognostic of a health change (good or bad), correlate the massive data sets to make sense of the full operating system of life and how that can be tracked longitudinally to track health or disease and to alter long term patient outcomes,” Fero said. The ingredients for building a digital twin are still a work in progress. But the promise of doing this well is immense. Here is an overview of 10 of these essential ingredients and the role they play in creating medical digital twins: EHR The first ingredient is the system of record, which is the Electronic Health Record (EHR) in the healthcare industry. EHR systems capture the interaction with physicians, tracking medications, treatment plans, and outcomes. Leading EHR system providers include Cerner Corporation, Epic Systems, and Meditech. These systems provide a baseline for organizing static information. They also face challenges when extending beyond existing healthcare workflows or across providers. One University of Utah study found that most implementations could not catch dangerous or deadly drug combinations 33% of the time in 2018, which is a noticeable improvement from 2011, when they missed 46% of prescription errors. These EHR packages all included the ability to detect when drug combinations would be a problem. The researchers surmised the issues that arose from how each hospital customized these systems for their unique workflows. The upshot is that more work is required to improve data quality and integrate it across multiple systems. Health Data Analytics Institute CEO Nassib Chamoun told VentureBeat, “Physicians have to make dozens of important decisions on diagnosis and treatment with limited time and incomplete information. Unfortunately, with current EHRs, the quantity and displays of data are overwhelming and disjointed.” Ontology Language is a byproduct of how people describe things in different organizations and contexts. Ontologies help provide order to this chaos by standardizing the meaning of data and its links to other concepts. The medical industry has evolved across many disciplines, leading to a wealth of ontologies. The National Center for Biomedical Ontology currently lists 953 medical ontologies with thirteen million classes. “Medicine is complicated, and it does not have a complete data model,” said Dave McComb, president of Semantic Arts, a business consulting firm specializing in applying ontologies to business systems, and author of Data-Centric Revolution. Efforts are afoot to unite these disparate ontologies, including SNOMED-CT, the most exhaustive medical ontology. McComb said these efforts would also need to address the way programmers encode the structure of this data, such as its naming, validation, security, integrity, and meaning in application code. In the meantime, digital twins will rely on tools like intelligent API gateways, NLP, and real-world evidence platforms to bridge the gaps between data silos. Graph databases Graph databases are great for tying together heterogeneous data about different concepts like symptoms and diseases with medical records, test results, and diagnoses into one system. Many digital twins use cases involve weaving together many different types and sources of data to see patterns, which is one of the strengths of graph databases. 
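To illustrate why graphs suit this kind of heterogeneous linking, the toy example below uses the networkx library as an in-memory stand-in for a graph database: records become nodes, relationships become first-class edges, and connected facts can be traversed without join queries. The patient data is fabricated.

```python
# Toy illustration of the graph idea with networkx (an in-memory graph library,
# not a production graph database). All patient data here is fabricated.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("patient:123", kind="patient")
g.add_node("dx:E11", kind="diagnosis", name="type 2 diabetes")
g.add_node("rx:metformin", kind="medication")
g.add_node("lab:hba1c_2022_01", kind="lab_result", value=7.9)

g.add_edge("patient:123", "dx:E11", rel="DIAGNOSED_WITH")
g.add_edge("patient:123", "rx:metformin", rel="TAKES")
g.add_edge("patient:123", "lab:hba1c_2022_01", rel="HAS_RESULT")
g.add_edge("lab:hba1c_2022_01", "dx:E11", rel="SUPPORTS")

# Traverse relationships directly, e.g. everything linked to the diagnosis
for src, dst, data in g.in_edges("dx:E11", data=True):
    print(f"{src} -[{data['rel']}]-> {dst}")
```

A production system would sit on a graph database such as Neo4j or TigerGraph, as the practitioners quoted below describe, but the underlying data model looks much the same.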
Neo4J director of graph data science Alicia Frame told VentureBeat, “We see many pharmaceutical and insurance companies using graph databases to get more out of their EHRs – importing EHR data into a graph DB to better understand how relationships impact outcomes, or to identify anomalous patterns of behavior.” For example, AstraZeneca uses EHR data and graph databases to better target new to market drugs and improve patient outcomes. One large insurance company uses TigerGraph graph databases to integrate data from over two hundred sources to improve patient history visibility during call center interactions. This gives the agent an instant picture of all diagnoses, claims, prescription refills, and phone interactions. This reduced call center handling time by 10% and increased its net promoter score, reflecting customer satisfaction. But Frame has seen more limited adoption of graph databases as the database of record for EHR systems in hospitals like Epic, Cerner, and others. “I attribute this to legacy systems using older technology, and the divide between storing the data (EHRs) and making sense of the data – where we often see graph databases coming into play,” she said. Down the road, TigerGraph’s healthcare industry practice lead, Andrew Anderson, expects to see graph databases playing a larger role in building community digital twins to measure and improve population health. “Access to care, food insecurities, demographics, and financial factors can only be addressed and predicted by leveraging medical information with, and benchmarking against, the social determinants of health ,” he said. APIs Whether modeling a patient or a hospital, digital twins are created by leveraging data sources, including electronic health records (EHRs), disease registries, wearables, and more. Gautam Shah, Change Healthcare, told VentureBeat, “Regardless of model type, APIs can play an integral role in driving the effective, scalable use of digital twins to improve the healthcare cost-quality curve.” “Healthcare data sources and formats are highly fragmented in many cases,” said Shah. APIs can help smooth the subtle differences in how data is named, organized, and managed across sources. APIs can also reduce the time to gather, correlate, and prepare data to focus on creating the mechanisms that deliver the underlying value of the digital twin. Modern API platforms evolve beyond data delivery pipes to function as intelligent connections. For instance, APIs can help build digital twins for precision medicine that capture the feedback and data and deliver them back to digital twins, allowing a constant refresh and update to the digital twin model. Natural language processing Medical data often exists across various sources, which can confound efforts to form a holistic picture of a patient, much less a population. “Digital twins can improve overall care by helping with information overload. We’re generating more data than ever before, and no one has time to sort through it all,” said David Talby, CTO of John Snow Labs. For example, if a person goes to see their regular primary care physician, they will have a baseline understanding of the patient, their medical history, and medications. If the same patient goes to see a specialist, they may be asked many repetitive questions. Clinical NLP software can extract information from imaging and free-text data and serves as the connective tissue between what can be found in EHRs. 
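A deliberately oversimplified sketch of that extraction step appears below; real clinical NLP relies on trained language models rather than regular expressions, and the note text is invented.

```python
# Pulling structured facts out of a free-text clinical note with simple pattern
# matching; a stand-in for trained clinical NLP models. The note is invented.
import re

note = ("Patient reports poor sleep and reduced appetite. Pain 6/10. "
        "Continue metformin 500 mg twice daily; start lisinopril 10 mg daily.")

# medication name followed by a dose, e.g. "metformin 500 mg"
med_pattern = re.compile(r"([A-Za-z]+)\s+(\d+)\s*mg", re.IGNORECASE)
pain_pattern = re.compile(r"pain\s+(\d+)/10", re.IGNORECASE)

medications = [{"drug": m.group(1).lower(), "dose_mg": int(m.group(2))}
               for m in med_pattern.finditer(note)]
pain = pain_pattern.search(note)

print(medications)                           # metformin 500 mg, lisinopril 10 mg
print(int(pain.group(1)) if pain else None)  # pain score: 6
```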
For example, Roche uses NLP to build a clinical decision support product portfolio, starting with oncology. The NLP extracts clinical facts from pathology and radiology reports and marries them with other information found in unstructured free-text data to inform better clinical decision-making. Structured data often characterized details, like whether the patient had a chronic condition, was taking any medication, or had insurance. But other considerations that affect a hospital state, such as pain level, appetite, and sleep patterns, can only be found in free-text data. NLP can help connect these dots. Biosimulation Biosimulation is a computer-aided mathematical simulation of what happens when a dose of a drug is introduced to a human body. It is a large, complex model that simulates the drug’s transport, metabolism, excretion, and action over time to increase safety and efficacy. Better models promise to increase the productivity of the $200 billion spent on drug development globally. “The development of biosimulation software platforms has been transformative in drug development over the past couple of decades, and this trend is expected to continue,” Certara CEO William Feehery, PhD, told VentureBeat. The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have issued more than two dozen modeling & simulation-related guidance documents addressing drug-drug interactions. And the number of scientific publications that include biosimulation has tripled over the last decade. One of the most promising areas has been mechanistic biosimulation, which integrates drug and physiological information to create a mathematical modeling framework. These models are instrumental in drug development to predict various untested clinical outcomes. Companies like Certara are taking the concept further by making digital twins of individual patients, replicating each patient’s different physiological attributes that affect a drug’s impact in their body and, hence, its effects. These advances have helped better target dosing for different subpopulations of patients, such as the elderly and children. “The next step is to take the virtual twin technology into patient care and clinical decision-making to guide personalized medicine,” Feehery said. Real-world evidence Researchers often need to query data from various sources to generate insight into a particular question. RWE platforms aggregate and vet raw data to ensure it is used correctly to determine the causal relationship that can be used to make critical decisions. About 75% of all new drug approvals by the FDA in 2020 included some form of RWE. Real-world data can come from EHRs, insurance claims, product and disease registries, medical devices, and wearables. Gathering complete and high-quality data is challenging due to the large variety of data sources in interoperability limitations. Dr. Khaled El Emam, SVP & general manager of replica analytics at Aetion, said, “These platforms will increase the value of synthetic data or digital twins by enabling customers to infer the same causalities that a researcher would discover in the source data. This goes beyond observable patterns a researcher may spot in analyzing digital twins without the support of an RWE platform to create the appropriate context.” One big takeaway for other safety-critical industries is the role that RWE workflows can play in improving the management of evidence to ensure the safety of buildings, vehicles, and other things. 
El Emam said, “Careful consideration of criteria to assure quality and feasibility is a major component in RWE workflows – and should be applied across the entire RWE generation process, from data sources and data processing to defining appropriate use cases.” Surgical intelligence Surgical intelligence is a new concept coined by Theator to characterize tools for capturing surgical process data from the surgical theater. “The main innovation lies not only in the structuring of data and new ontologies we create but in the immediate feedback surgeons receive, as soon as they scrub out of a case,” Dr. Tamir Wolf, CEO and cofounder of Theator , told VentureBeat, It’s similar to other kinds of physical process capture tools in industries like manufacturing and logistics from companies like Drishti and Tulip Interfaces. In medicine , these tools allow surgeons to zero in on specific stages in surgical operations and capture minute details on how procedures were performed. Wolf said, “One of the first and most crucial steps in enabling hospital systems to deploy digital twins effectively will lie in their ability to collect robust high-quality data about the care being provided, connect performance to outcomes, and disseminate best practices.” Predictive analytics One promising aspect of digital twins is that they can help predict the course of a specific combination of symptoms and then assess the odds that various combinations of interventions will lead to recovery. Predictive analytics tools can collaborate with digital twins to match a patient’s digital twin to others with a similar profile. Health Data Analytics Institute CEO Nassib Chamoun said, “Advanced statistical techniques are used to determine the prospective health risk profile, and the clinician can then assess what types of treatments have worked for these types of patients in the past and make more informed decisions on care for the current patient.” Predictive analytics tools can help predict various treatment approaches’ costs and clinical outcomes. The predictive analytics work with the digital twins to generate different UI experiences to surface important insight. For example, HDAI has developed custom views for clinicians, patients, and population health managers. The clinician views are embedded into EHRs, while the population and patient views are embedded into various apps. Visualization It’s often more important to highlight salient medical details than simply display realistic ray-traced imagery in medicine. For example, better insight can help physicians improve their use of medical imaging to make essential decisions on factors such as implant size and positioning. FEops CEO Matthieu De Beule said, “This is not always straightforward, since it can often become challenging to imagine how devices will interact with different patients.” Regulatory certified medical digital twins of organs can improve surgical planning and guidance. For example, FEops has developed a regulatory cleared heart simulation to reduce procedure time and radiation exposure. Major heart valve manufacturers also use it for next-generation implant development. De Beule said his company is working with big medical imaging players like GE, Philips, and Siemens. The FEops HEARTguide product uses AI to calibrate the raw imaging data to the patient’s unique anatomy and physiology. This helps accentuate the landmarks that guide doctors during surgery for appropriate device placement. 
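The matching step that Chamoun describes can be pictured as a nearest-neighbor search over patient profiles, as in the sketch below; the features, values and outcome labels are all fabricated, and a real system would work from far richer, standardized records.

```python
# Sketch of matching a patient profile to similar historical patients, the step
# predictive analytics layers on top of a digital twin. All data is fabricated.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# columns: age, BMI, systolic BP, HbA1c
historical = np.array([
    [67, 31.0, 148, 8.1],
    [54, 27.5, 132, 6.2],
    [71, 29.8, 151, 7.9],
    [49, 24.1, 121, 5.6],
    [63, 33.2, 145, 8.4],
])
outcomes = ["responded to therapy A", "no intervention needed",
            "responded to therapy B", "no intervention needed",
            "responded to therapy A"]

scaler = StandardScaler().fit(historical)
index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(historical))

new_patient = np.array([[69, 30.4, 150, 8.0]])
_, idx = index.kneighbors(scaler.transform(new_patient))

for i in idx[0]:
    print(f"Similar patient {i}: {outcomes[i]}")
```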
"
15,029
2,022
"7 ways to improve data for supply chain digital twins | VentureBeat"
"https://venturebeat.com/2022/04/15/7-ways-to-improve-data-for-supply-chain-digital-twins"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 7 ways to improve data for supply chain digital twins Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprises are beginning to create digital twins of different aspects of their supply chains for simulation purposes. Various approaches to supply chain twins show tremendous value in sorting out supply chain bottlenecks, improving efficiency and meeting sustainability goals. “Digital twins can be used to create digital copies of product lines, manufacturing systems, warehouse inventory and other processes that are then analyzed – allowing supply chain managers to extract data, predict supply and demand and streamline operations,” said Kevin Beasley, CIO at Vormittag Associates Inc. , a company that offers integrated enterprise resource planning (ERP) solutions for databases. Digital copies can mirror supply chain touchpoints, helping to streamline business operations by pinpointing the exact processes taking place. By implementing digital twin technology to align with ongoing supply chain touchpoints and operations, companies can gain better insights into how to pivot and manage hiccups. But enterprises face numerous challenges in transforming raw supply chain data into living, breathing digital twins. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “As supply chains continue to build up more data than ever before, the adoption of IoT technology and predictive analytics tools to capture and process this data and drive business insights has become increasingly important to the success of digital twins,” Beasley said. Things are starting to improve. In the past, the use of digital twins was more challenging to implement as supply chain segments were more separated and data was siloed. Now, with the rise of cloud-based systems and automated supply chain management tools, digital twins are becoming increasingly useful to predict trends, manage warehouse inventory, minimize quality faults and integrate one seamless flow of data. Moving forward, Beasley expects to see the use of digital twins evolve alongside artificial intelligence (AI)-enabled modeling and IoT technology. For example, while IoT devices and sensors located throughout the supply chain have expedited the use of data to drive predictions on supply chain trends, the use of AI would make this system even more powerful. 
As AI-enabled models advance, manufacturers will be able to utilize data insights and create digital twin technology that can transform their ability to streamline operations, predict inventory and cut down on waste. Here are seven ways to transform raw data into actionable supply chain twins: Start with digital threads Jason Kasper, director of product marketing at product development software provider, Aras Corporation , explains that it is essential to include the digital thread when planning out a digital twin. These must work in concert for practical analysis and decision-making within the supply chain. In the context of a supply chain, he sees a digital twin as a representation of the configuration of all assets, including warehouses, manufacturing and supplier facilities, trucks, ships and planes. It also links to digital thread data such as inventory, location status and condition of assets. By developing the backbone for a digital thread, organizations can weave together meaningful relationships, connections, decisions and who made them. “Creating this complete view enables a full understanding of a specific supply chain’s status and the actions to keep it operationally efficient,” Kasper said. Move from tables to graphs Most enterprise applications capture data and put it into tables and the relationships or links between objects represented by the data are only revealed when you execute a query and join the data — and joins are computationally expensive, according to Richard Henderson, director of presales EMEA at TigerGraph. As a query grows in scope and complexity, this overhead makes queries across any reasonably sized digital twin too slow to be useful in the operational context, taking hours or even days. Businesses such as luxury vehicle manufacturer, Jaguar Land Rover , have found they can get around this problem by building their digital twin using a graph database. When Jaguar Land Rover attempted to build a model of its manufacturing supply chain using SQL, testing revealed that it would take three weeks to run one query to view their supply chain for one model of a car over six months. When they built the model in TigerGraph, the same query took 45 minutes and with further refinements, this is being brought down to seconds. A graph database approach allowed them to visualize relationships between business areas that previously existed in silos to identify critical paths, trace components and processes in greater detail than ever before and explore business scenarios in a safe, sandbox environment. Keep pace with data drift Another big challenge for digital twins is data drift, said Greg Price, CEO and cofounder at Shipwell , a cloud based TMS solution provider. Teams need to ensure the data collected for the digital twin accurately and consistently represents the true conditions of the physical twin. Additionally, having the best quality data is key to deriving full value from a digital twin. This is slowly getting better as teams move towards streaming analytics, but the practice is not yet prevalent within the industry. It is also not just the ability to have the data but the ability to understand it. Without good behavioral understanding, the interpretations run the risk of being off base, which can lead to poor decision-making. Companies need to build competency to understand how data drift can occur across the supply chain and then develop countermeasures to minimize its impact across each aspect of the supply chain, such as pricing and route management. 
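One lightweight countermeasure is a routine distribution check between a trusted baseline window and the most recent feed, as sketched below; the two-sample Kolmogorov-Smirnov test and the simulated transit-time data are only meant to illustrate the idea, and thresholds would need tuning in practice.

```python
# Simple drift check: compare recent sensor readings against a trusted baseline
# window and flag a significant shift. Data here is simulated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
baseline = rng.normal(loc=4.0, scale=0.3, size=1_000)   # e.g. transit time, days
recent = rng.normal(loc=4.6, scale=0.4, size=200)       # readings after a process change

stat, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:
    print(f"Drift suspected (KS statistic {stat:.2f}): refresh the twin's inputs")
else:
    print("Recent data still matches the baseline distribution")
```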
Bridge data silos Because data is not standardized and the digital systems used to manage the supply chain, such as ERP systems or warehouse management systems (WMS), were not created to be connected or share information. Sam Lurye, CEO and founder of Kargo , a supply chain logistics and data solutions platform, explained that, “The biggest challenge in exchanging data is that it is extremely siloed across the supply chain.” New companies are emerging to solve for this problem and they do so in one of two ways: aggregating existing data or generating a new data source. Project44 is an example of a company that aggregates data from antiquated systems and makes it operational. Companies like Samsara and Kargo build their own unique data sources that create a source of truth with real-time, accurate data. The more real-time data you have, the better the digital twin. Improving 3D capture Even when supply chain twins are focused on modeling the relationships between suppliers and distributors, they can benefit from better 3D models representing products, processes and facilities. “When new items are introduced in a supply chain, as they often are in such a dynamic environment, there’s the challenge of ensuring that all components are continuously updated, as the representation must work hand-in-hand with the data to maintain the correctness of this solution,” said Ravi Kiran, CEO and founder of SmartCow , an AI engineering company. Efforts in photogrammetry are attempting to tackle the issue through automation, but the technology has to evolve before it can be used in complex supply chain applications. Include subject-matter experts It takes a concerted effort to integrate with appropriate systems to ensure a robust digital twin is configured. “The challenge to making this work well is having the required subject-matter experts step back from the daily management of the supply chain and its processes to support the configuration of the digital twin,” said Owen Keates, industry executive for Hitachi Vantara ‘s manufacturing practice. These experts understand how real-world processes integrate into the flow between ERP, supplier and third-party logistics systems, through to point-of-sale systems. “Such investment in time from supply chain specialists will ensure that not only is the digital twin a true representation of the real world, but it also gets the team deeply invested in the digital twin and expedites the adoption of the digital twin process,” he added. Leverage the cloud Cloud providers are starting to provide a staging ground for consolidating supply chain data across business apps and even across partners. For example. Google Supply Chain Twin brings together data from disparate sources while requiring less partner integration time than traditional API-based integration. “Since Google Cloud launched Supply Chain Twin, customers have seen a 95% reduction in analytics processing time, with some companies dropping from two and a half hours down to eight minutes,” said Hans Thalbauer, Google Cloud’s managing director of global supply chain, logistics and transportation. Until recently, large companies only exchanged data based on legacy technologies like EDI. A cloud-based approach can not only improve data sharing across partners, but it can also lower the bar for weaving in contextual data about weather, risk and customer sentiment to gain deeper insight into their operations. 
“Our vision for the supply chain is to change the world by leveraging intelligence to create a transparent and sustainable supply chain for everyone. Building an ecosystem with partners on data, applications and implementation services is a top priority to enable this vision,” Thalbauer said. Supply chain leaders are also starting to take advantage of Microsoft’s digital twin integrations. “Microsoft Azure could be a game-changer for many industries that rely on internal and extraneous data sources for their planning and scheduling,” said Yogesh Amraotkar, managing director of NTT Data’s supply chain transformation. Azure also provides tools that make it easier to combine real-time sensor data from IoT Hub with visualizations of supply chain elements in IoT Central. Blue Yonder’s software-as-a-service solutions for the supply chain are built on the Microsoft Azure Cloud, which is growing rapidly across the globe. “Supply chain planning in the cloud, in the form of SaaS solutions, has already become the norm in the supply chain software industry,” said Puneet Saxena, corporate vice president of global manufacturing high-tech at Blue Yonder, a supply chain management provider. Linking an ecosystem of data providers still requires time and implementation effort, but once established, these automated linkages can keep operating successfully without excessive human effort, and trends in this vein of technology are likely to continue. "
15,030
2,022
"Report: 80% of global datasphere will be unstructured by 2025 | VentureBeat"
"https://venturebeat.com/2022/05/05/report-80-of-global-datasphere-will-be-unstructured-by-2025"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 80% of global datasphere will be unstructured by 2025 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a new report by nRoad , analysts predict the global datasphere will grow to 163 zettabytes by 2025, and about 80% of that will be unstructured. In regulated industries, such as financial services, the challenges posed by unstructured data are exponentially higher. It is estimated that two-thirds of financial data is hidden in content sources that are not readily transparent. With unstructured data growing at an unprecedented rate, financial services firms are finding it difficult to harness data and derive actionable insights. Through extensive research nRoad discovered that volume, velocity, variability and variety exacerbate the challenge. Unstructured data that lack metadata, such as field names, proliferate at increasing rates every year. However, most of an organization’s unstructured data is in the form of documents that include customer communication. And the content of documents differs so substantially — not just from domain to domain, but between specific use cases within fields. Current approaches, from Robotic Process Automation (RPA) to Natural Language Processing (NLP) models that use deep learning to produce human-like text remain unfeasibly resource-intensive and too generalized to address the totality of niche problems in the enterprise. These generic, one-size-fits-all solutions lack domain knowledge and industry-specific terminology, which diminishes their value. Even if they can successfully process 90% of a document in many real-world scenarios, a critical 10% is not correctly extracted. The landscape that emerges to tackle unstructured data will not consist of a single winner-takes-all platform. Instead, the ecosystem will be far more fragmented and specialized, with solutions providers responding to specific enterprise needs and generating business outcomes based on their demonstrated abilities to solve a handful of challenges relating to unstructured data rather than their abilities to solve all of them. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! First and foremost, reliable unstructured data processing for enterprises requires incorporating domain knowledge as more than a mere adjunct to a larger platform. Instead, it is an inextricable component of any foundation for extracting and summarizing documents. 
Financial services firms cannot leave behind 85% of their data. With the approach outlined here, they have an opportunity to incorporate valuable information and insights from unstructured sources into mission-critical business flows. Read the full report by nRoad. "
15,031
2,022
"What do graph database benchmarks mean for enterprises? | VentureBeat"
"https://venturebeat.com/2022/05/11/what-do-graph-database-benchmarks-mean-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What do graph database benchmarks mean for enterprises? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Graph databases are playing a growing role in improving fraud detection, recommendation engines, lead prioritization, digital twins and old-fashioned analytics. But they suffer performance, scalability and reliability issues compared to traditional databases. Emerging graph database benchmarks are already helping to overcome these hurdles. For example, TigerGraph recently used these benchmarks to scale its database to support 30 terabytes ( TB) of graph data, up from 1 TB in 2019 and 5 TB in 2020. David Ronald, director of product marketing at TigerGraph, told VentureBeat that TigerGraph uses the LDBC benchmarks to check its engine performance and storage footprint after each release. If it sees a degradation, the results help it figure out where to look for problems. The TigerGraph team also collaborates with hardware vendors to run benchmarks on their hardware. This is important, particularly as enterprises look for ways to operationalize the data currently tucked away across databases, data warehouses and data lakes that represent entities called vertices and the connections between them called edges. “With the ongoing digital transformation, more and more enterprises have hundreds of billions of vertices and hundreds of billions of edges,” Ronald said. Dawn of graph benchmarks The European Union tasked researchers with forming the Linked Data Benchmark Council (LDBC) to evaluate graph databases’ performance for essential tasks to address these limitations. These benchmarks help graph database vendors identify weaknesses in their current architectures, identify problems in how they implement queries and scale to solve common business problems. They can also help enterprises vet the performance of databases in a way that is relevant to common business problems they want to address. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Peter Boncz, professor at Vrije Universiteit and founder of the LDBC, told VentureBeat these benchmarks help systems achieve and maintain performance. LDBC members include leading graph database vendors like TigerGraph, Neo4J, Oracle, AWS and Ant Group. These companies use the benchmarks continuously as an internal test for their systems. 
The benchmarks also point to difficult areas, like path finding in graphs, pattern matching in graphs, join ordering and query optimization. “To do well on these benchmarks, systems need to adopt at least state of the art in these areas if not extend state of the art,” Boncz said.

Boncz has also seen various other benefits arise from LDBC cooperation. For example, LDBC collaboration has helped drive standardization of the graph data model and query languages. This standardization eases the definition of benchmarks, is valuable to users and accelerates the field’s maturity. LDBC members also venture beyond benchmarking to start task forces on graph schema languages and graph query languages. The LDBC has also begun collaborating with the ISO working group for the SQL standard. As a result of these efforts, Boncz expects the updated SQL:2023 standard to include graph query functionality (SQL/PGQ, or Property Graph Query) and the release of an entirely new standard graph query language called GQL.

Types of benchmarks

The LDBC has developed three types of benchmarks for various use cases.

The Social Network Benchmark (SNB) suite is the most directly applicable to common enterprise use cases. It targets common graph database management systems and supports both interactive and business intelligence workloads. It mimics the kinds of analytics enterprises might do with fraud detection, product recommendation and lead generation algorithms. The largest SNB dataset, at Scale Factor 30k, involves processing 36 TB of data with 72.6 billion vertices and 533.5 billion edges.

The Graphalytics benchmark is an industrial-grade benchmark for graph analysis. It can test datasets with up to 100 million vertices and 9.4 billion edges, and it is well suited to measuring classic graph algorithms such as PageRank and community detection. The machine learning and AI community is adopting it to improve model accuracy.

The Semantic Publishing Benchmark uses an older web data schema called RDF. It is based on a use case from the BBC, an early adopter of RDF. “Most graph system growth has been around the property graph data model, not RDF,” Boncz said. As a result, the SNB, aimed at property graph data, has received considerably more attention.

Plan for real-world use cases

Graph database benchmarks are a great tool for helping vendors improve their products and for helping enterprises assess the veracity of vendor claims using an apples-to-apples comparison. “But raw performance doesn’t tell the whole story of any technology, particularly in the granular world of graph databases,” said Greg Seaton, VP of product at Fluree, a blockchain graph database provider.

For example, small to medium enterprises may not need to regularly process millions of graph structures, called triples, every second. They may see greater benefit from advanced value-add features like transaction blockchains, level-2 off-chain storage, non-repudiation of data, interoperability, standards support, provenance and time-travel query capabilities, which require more processing than straight graph, relational or other NoSQL stores. As long as the performance of the graph storage platform is right-sized for the enterprise, and the capabilities fit the needs of that enterprise, performance past a certain point, although nice to have, is not as crucial as that fit. Seaton said, “Not every graph database has to be a Formula One race car.
There are many industry needs and domain use cases that are better served by trucks and panel vans with the features and functionality to support necessary enterprise operations.”

Prepping for graph data

Benchmarks have played a tremendous role in shaping machine learning and database tools, and graph database experts hope that better benchmarks can play a similar role in the evolution of graph databases. Ronald sees a need for more graph database benchmarks in verticals. For example, there are many interesting query patterns in the financial sector that the LDBC-SNB benchmark has not captured. “We hope there will be more benchmark studies in the future, as this will result in greater awareness of the relative merits of different graph databases and accelerated adoption of graph technology,” he said.

Boncz wants to see more audited benchmark results for the existing Social Network Benchmark. The LDBC has shown interesting results for the Interactive workload benchmark and is now finishing a second benchmark for business intelligence workloads. Boncz suggested interested parties check out the upcoming LDBC Technical User Community meeting coinciding with the ACM SIGMOD 2022 conference in Philadelphia. “These events are perfect places to provide feedback on the benchmarks and learn about the new trends,” he said. "
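To give a feel for what an SNB-style interactive query exercises, here is a self-contained toy: a tiny property graph held in adjacency lists and a two-hop "friends and friends-of-friends in a given city" lookup. It is plain Python over a handful of vertices, not the LDBC driver or any vendor's query language; the benchmark's point is that real systems must answer patterns like this over hundreds of billions of edges.

```python
from collections import defaultdict, deque
from typing import Dict

# Toy property graph: person vertices with a "city" property and KNOWS edges.
knows = defaultdict(set)
for a, b in [("alice", "bob"), ("bob", "carol"), ("carol", "dave"), ("alice", "erin")]:
    knows[a].add(b)
    knows[b].add(a)

city = {"alice": "Austin", "bob": "Austin", "carol": "Berlin", "dave": "Austin", "erin": "Berlin"}

def friends_within(start: str, max_hops: int) -> Dict[str, int]:
    """Breadth-first search out to max_hops, returning each reachable person and hop count."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if dist[person] == max_hops:
            continue
        for friend in knows[person]:
            if friend not in dist:
                dist[friend] = dist[person] + 1
                queue.append(friend)
    dist.pop(start)
    return dist

# SNB-flavored question: alice's friends and friends-of-friends who live in Austin.
matches = {p: hops for p, hops in friends_within("alice", 2).items() if city[p] == "Austin"}
print(matches)  # {'bob': 1} -- carol and erin live in Berlin, dave is three hops away
```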
15,032
2,022
"Emerging digital twins standards promote interoperability | VentureBeat"
"https://venturebeat.com/2022/05/27/emerging-digital-twins-standards-promote-interoperability"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Emerging digital twins standards promote interoperability Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Digital twins have the potential to transform the way products are designed, built and operated to improve sustainability and profitability. But most digital twin projects to date have focused on a specific use case. Emerging digital twins standards promise to help connect the dots between individual digital twins to enable systems of systems. Multiple standards are being developed by the International Standards Organization (ISO), the Industrial Digital Twins Association (IDTA) and the Digital Twins Consortium (DTC) — creating some challenges for unifying digital twins into systems. At the Digital Twin Summit , experts from each organization weighed in on where these standards are today and what is ahead. Irene Petrick, senior director of industrial innovation at Intel, said people tend to think about digital twins as monolithic constructs. “Instead, we really need to be thinking about digital twins as being anchored in a point of view,” Petrick said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! System of digital twin systems What a digital twin needs and how it adds value are perceived differently when considered at the level of a machine, a factory production cell, engineering and design or a leadership level, Petrick explained. A broader adoption of standards could support the interoperability required to enable this system-of-systems approach. Sameer Kher, senior director of product development at Ansys, who also sits on the steering committee for the DTC, pointed out that a manufacturing facility generally consists of various kinds of equipment. Each of these elements is, in turn, composed of engineering systems such as robot arms, motor drives and software. Standards are essential for all these things to work together in the real-world and virtual world. The DTC approaches this problem by developing a collection of open-source software to implement best practices and support open-source collaboration. Enabling different types of questions As one example, Boeing has been a big proponent of digital twin standards and has been working with various groups to evolve standards from the physical world to the digital. 
Kenneth Swope, senior manager of enterprise interoperability standards at Boeing, said, “Historically, these [standards] all used to be with respect to things like processes and properties and characteristics of physical things. And they continue to be so, but in addition, they are now about the data as well.” His team is looking at improving the utilization and exchange of data across groups involved in engineering, manufacturing, supply chain, service and support.

Teams have been exchanging data for a while. “But the really neat thing about digital twins is the opportunity to ask different types of questions, to really solve age-old business problems of how can I make my product safer, how can I make my product with higher quality and how can I make my product more efficient,” Swope said.

Boeing worked on the ISO-23247 standard to show how digital twins could help optimize the design and assembly of fasteners in a jet wing. Using digital twins in concert with instrumentation in the manufacturing process helped identify opportunities to reduce the size and number of fasteners, shaving hundreds of pounds off the weight of a wing compared to a traditional approach.

Digital threads promise to connect the digital twin dots

Gordon (Guodong) Shao is a computer scientist in the lifecycle engineering group at the National Institute of Standards and Technology (NIST). He has been helping coordinate efforts on the ISO-23247 standard and developing best practices as part of a NIST manufacturing testbed. He also wrote up a deep dive on various use cases of the technology.

Shao observed that digital twins require the interplay of components for data collection, processing, communication, modeling, analytics, simulation and control. Some of these components can be distributed, so seamlessly integrating the various pieces is a significant challenge. “We need to take a system-of-systems approach to characterize and manage these subsystems to ensure cross-disciplinary interoperability and maintain the credibility of digital twins,” Shao said.

ISO-23247 provides a generic digital twin development framework to help manufacturers choose building blocks for digital twin implementations. It can help them analyze digital twin project requirements and use common terminology when communicating with suppliers, partners and customers. A vital feature of this new standard is support for digital threads that connect the dots between data from different parts of a product’s life cycle. Digital threads will help optimize processes at the enterprise level, going beyond the local optimization available with siloed digital twins.

A lifecycle approach

Digital twins today are mostly application-driven. “But what we really need is the interoperable digital twin so we can realize the interoperability between these different digital twins,” said Christian Mosch, general manager at IDTA.

The IDTA Asset Administration Shell standard provides a framework for sharing data across different lifecycle phases such as planning, development, construction, commissioning, operation and recycling at the end of life. It provides a way of thinking about an asset, such as a robot arm, and the administration of the different data and documents that describe it across various lifecycle phases. The shell provides a container for consistently storing different types of information and documentation.
For example, the robot arm might include engineering data such as 3D geometry drawings, design properties and simulation results. It may also include documentation such as declarations of conformity and proof certifications. The Asset Administration Shell also brings data from the operations technology used to manage equipment on the shop floor into the IT realm to represent data across the lifecycle. For example, once it hits the shop floor, the robot arm generates a stream of operations data that is gathered using standards like OPC UA. Teams may also create processes that use the robot with specifications written in AutomationML.

Digital twin standards are still young. The ISO-23247 standard was only finalized last October. But panelists expect widespread adoption could play a crucial role in digital transformation for physical industries. “If you think about standards, both from the systems-of-systems approach and the lifecycle approach, and use the digital thread, you can save a lot of effort on the information exchange, reuse the information and avoid customized effort,” Shao said. "
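The container idea behind the Asset Administration Shell can be pictured with a short sketch: one shell per asset, holding submodels for engineering data, documentation and operations data across the lifecycle. The class and field names below are a simplified illustration, not the normative AAS metamodel or the IDTA's API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Submodel:
    """One facet of an asset, such as engineering data, documentation or operations data."""
    name: str
    properties: Dict[str, Any] = field(default_factory=dict)

@dataclass
class AssetShell:
    """Simplified stand-in for an Asset Administration Shell: one container per asset."""
    asset_id: str
    submodels: List[Submodel] = field(default_factory=list)

    def get(self, name: str) -> Submodel:
        return next(s for s in self.submodels if s.name == name)

robot_arm = AssetShell(
    asset_id="robot-arm-017",
    submodels=[
        Submodel("engineering", {"cad_model": "arm_v3.step", "payload_kg": 12}),
        Submodel("documentation", {"declaration_of_conformity": "doc-4711.pdf"}),
        Submodel("operations", {"opcua_endpoint": "opc.tcp://plant01:4840", "cycle_count": 0}),
    ],
)

# The same container carries design-time and run-time data across lifecycle phases.
robot_arm.get("operations").properties["cycle_count"] += 1
print(robot_arm.get("operations").properties)
```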
15,033
2,017
"AI Weekly: Musk and Zuck are missing the point | VentureBeat"
"https://venturebeat.com/2017/07/27/ai-weekly-musk-and-zuck-are-missing-the-point"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Musk and Zuck are missing the point Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg had a little tit-for-tat this week over artificial intelligence. Does it represent an existential threat to humanity , as Musk argues, or does it hold great promise to improve our lives, as Zuckerberg believes? On a Facebook Live broadcast , Zuckerberg declared, “I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative, and in some ways I actually think it is pretty irresponsible.” Musk took the kerfuffle to Twitter , replying: “I’ve talked to Mark about this. His understanding of the subject is limited.” Ouch. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A few hours after Musk tweeted his barb, a voice of reason took the stage at a Harvard Business Review event in San Francisco. Andrew Ng , the cofounder of Coursera and former chief scientist at Chinese technology powerhouse Baidu, said, “As an AI insider, having built and shipped a lot of AI products, I don’t see a clear path for AI to surpass human-level intelligence. I think that job displacement is a huge problem, and the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements.” Ng went on to warn, “As a society, we’re over-investing our attention on evil AI and under-investing on job disruption and the underlying educational changes that need to happen.” And he’s right. The asteroid bearing down on us is not a dystopian future with robot overlords, but the tens of thousands of workers who will lose their jobs to AI. “A lot of people doing the jobs that are about to go away — they don’t understand AI, they don’t have the training to understand AI. And so a lot of people whose jobs are going to go away don’t know that they’re in the crosshairs.” And that’s where we should be investing our attention. Please send newsletter feedback and guest post submissions to John Brandon ; email news tips to Blair Hanley Frank and Khari Johnson — and be sure to bookmark our AI Channel. Thanks for reading, Blaise Zerega Editor in Chief P.S. Please enjoy this video from MB 2017 , “ The New York Times, AI and long-term engagement. 
” From the AI Channel Aiqudo raises $5.2 million for voice commands that control smartphone apps Aiqudo today announced it has raised a $5.2 million funding round. The startup uses voice assistants like Amazon’s Alexa to control smartphone apps with pre-made or customizable voice commands. Aiqudo is currently available in beta in the Google Play Android store and is being reviewed for consideration in the iOS App Store. The funding will be […] Read the full story AI expert: Worry more about jobs than killer robots While there has been a lot of talk about super-smart artificial intelligence lately, one of the leaders in the field thinks there are more pressing problems for humanity to solve. Andrew Ng, the cofounder of Coursera and former chief scientist at Chinese technology powerhouse Baidu, told an audience at a Harvard Business Review event today […] Read the full story Khosla Ventures leads $50 million investment in Vicarious’ AI tech Vicarious, which is working on narrowing the gap between human and artificial intelligence, announced today that it has raised $50 million in a round led by Khosla Ventures. The Union City, California-based startup is using computational neuroscience to build better machine learning models that help robots quickly address a wide variety of tasks. Vicarious focuses on […] Read the full story 14 ways AI will impact the education sector There have been a lot of digital “next big things” in education over the years — everything from the Apple IIe to online learning. The latest is artificial intelligence education tech (AI ed), and only time will tell what impact it ultimately has. But for something as important as education, now is the time to […] Read the full story What AI-enhanced health care could look like in 5 years This summer, KPCB partner Mary Meeker’s 2017 Internet Trends report singled out healthcare as a sector ripe with opportunity. The report proposed that the healthcare market, driven by a number of converging technologies, is approaching a “digital inflection point” and is currently positioned for rapid growth. This is an understatement. Due to increasing digitization of […] Read the full story Intel’s new hardware puts AI computation on a USB stick Hoping to lower the bar to entry for those making artificial intelligence apps, Intel launched the Movidius Neural Compute Stick, the world’s first USB-based deep learning inference kit and self-contained AI accelerator. The compute stick, which is akin to other Intel PC-on-a-USB products, can deliver deep learning neural network processing capabilities to a wide range of host devices […] Read the full story Beyond VB A computer was asked to predict which start-ups would be successful. The results were astonishing In 2009, Ira Sager of Businessweek magazine set a challenge for Quid AI’s CEO Bob Goodson: programme a computer to pick 50 unheard of companies that are set to rock the world. (via World Economic Forum) Read the full story AI May Soon Replace Even the Most Elite Consultants Amazon’s Alexa just got a new job. In addition to her other 15,000 skills like playing music and telling knock-knock jokes, she can now also answer economic questions for clients of the Swiss global financial services company, UBS Group AG. (via Harvard Business Review) Read the full story Musk and Zuckerberg clash over future of AI It comes after the Facebook boss said that the doomsday scenario put forward by Mr Musk was unhelpful. Mr Musk tweeted: “I’ve talked to Mark about this. 
His understanding of the subject is limited.” The pair represent two distinct groups: those saying AI’s benefits will outweigh its negatives and those saying it could ultimately destroy humanity. (via BBC News) Read the full story Beijing Wants A.I. to Be Made in China by 2030 If Beijing has its way, the future of artificial intelligence will be made in China. The country laid out a development plan on Thursday to become the world leader in A.I. by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion. (via New York Times) Read the full story Subscribe to AI Weekly and receive this newsletter every Thursday "
15,034
2,017
"AI Weekly: Elon Musk is worried about killer bots again. Or is he? | VentureBeat"
"https://venturebeat.com/2017/08/24/ai-weekly-elon-musk-is-worried-about-killer-bots-again-or-is-he"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Elon Musk is worried about killer bots again. Or is he? Share on Facebook Share on X Share on LinkedIn Tesla CEO Elon Musk Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Killer robots might be coming for us, but not if Elon Musk has his way. In a letter to the United Nations, Musk championed the cause for 116 entrepreneurs and AI experts to set guidelines for future robots that can make decisions about killing humans. Of course, everyone went into hysterics. The “killer robots” phrase somehow became the norm for many headlines, even though Musk was actually hoping to form a committee (called the Group of Governmental Experts on Lethal Autonomous Weapon Systems). At the same time, the letter does sound ominous: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Musk seems to be on a war of his own — hoping to create a roadmap for future robotic innovations. As usual, I have a theory about why that is. I think he’s trying to get us to think about AI dangers and, at the same time, realize that AI developments are escalating quickly. He’s not saying the sky is falling. He’s saying there is major progress and that’s a good thing, but with progress there are also many ramifications. I happen to agree with him about the need for diligence. To me, it’s too early for regulations — we’ve seen that already with drone licensing where you had to write a code on the side of your DJI and pay a $5 fee. (The ruling is no longer in effect.) Drones are not filling the sky just yet, but at some point, licensing will make sense. At the very least, it opens up a conversation about AI, and that’s always a good thing. Thanks for reading, John Brandon VentureBeat Editor From the AI Channel Augmented reality’s future isn’t glasses. It’s the car In the next seven years, true augmented reality will likely not become mainstream anywhere, except the automotive industry — and here are the reasons why. 
By “true augmented reality” we mean AR that shows virtual objects to be actually integrated with the real environment and visible on various depths, not only on a screen’s surface. […] Read the full story Samsung confirms plans to launch a smart speaker Samsung is working on a smart speaker to take on Amazon’s Alexa-enabled devices, Google Home, and soon Apple’s HomePod. The existence of a Samsung smart speaker and plans to release a speaker soon were shared with CNBC today by DJ Koh, president of the mobile division at Samsung. Koh did not share details about the […] Read the full story SoftBank’s Pepper robot programed to perform Buddhist funeral rites in Japan (Reuters) — A Japanese company has introduced a new role for SoftBank’s humanoid robot “Pepper” — a Buddhist priest for hire at funerals. Chanting sutras in a computerized voice while tapping a drum, the robot was on display on Wednesday at a funeral industry fair — the Life Ending Industry Expo — in Tokyo. Nissei […] Read the full story ‘World’s first’ fully autonomous drone delivery service kicks off in Iceland Drones represent the future of consumer delivery services, if recent developments are anything to go by. Amazon, 7-Eleven, DoorDash, and the like have all dabbled with unmanned vehicles, both in the skies and on the sidewalks. But one Icelandic company is claiming a first in terms of a permanent, fully autonomous commercial drone delivery service that doesn’t have […] Read the full story Apple may be building a self-driving shuttle for employees, with the unfortunate name PAIL Dreams of an autonomous Apple car may be dead, but it seems the company is at least trying to resurrect some of its self-driving tech. According to a report in the New York Times, Apple is working on a self-driving shuttle to cart its employees around its sprawling network of offices in the Cupertino area. While […] Read the full story Microsoft unveils Brainwave, a system for running super-fast AI Microsoft made a splash in the world of dedicated AI hardware today when it unveiled a new system for doing high-speed, low-latency serving of machine learning models. The company showed off a new system called Brainwave that will allow developers to deploy machine learning models onto programmable silicon and achieve high performance beyond what they’d […] Read the full story Leap.ai launches job matching platform after raising $2.4 million Leap.ai, which uses artificial intelligence to match job seekers with employers, today launched with $2.4 million in seed money. Participants included ZhenFund, established in collaboration with Sequoia Capital China, as well as several angel investors. Users sign up on Leap.ai’s website or iOS app to create a profile and submit a self-assessment, detailing their strengths, […] Read the full story Databricks raises $140 million to accelerate AI in the enterprise Databricks, which provides software to help fuse big data and artificial intelligence, announced today that it has secured an additional $140 million in funding in a round led by Andreessen Horowitz, with participation from New Enterprise Associates (NEA). New investors include Battery Ventures, Future Fund Investment, A.Capital Partners, Geodesic Capital, and Green Bay Ventures. “Only one […] Read the full story Beyond VB Elon Musk leads 116 experts calling for outright ban of killer robots Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots. 
Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman are leading a group of 116 specialists from across 26 countries who are calling for the ban on autonomous weapons. (via The Guardian) Read the full story The World’s First Album Composed and Produced by an AI Has Been Unveiled “Break Free” is the first song released in a new album by Taryn Southern. The song, indeed, the entire album, features an artist known as Amper — but what looks like a typical collaboration between artists is actually much more than that. Taryn is no stranger to the music and entertainment industry. She is a singer and digital storyteller who has amassed more than 500 million views on YouTube, and she has over 450 thousand subscribers. (via Futurism) Read the full story Google-made algorithm automatically removes watermarks from stock photos Stock photo distributors must be squirming in fear. Researchers from Google have developed an algorithm that completely removes watermarks from images in a matter of seconds — and it works entirely automatically. Photography professionals will often slap watermarks on their images to protect their copyrights and prevent people from using them without their permission. However, the researchers were able to identify a glaring error in this approach and exploit it to negate the visibility of watermarks altogether. (via TNW) Read the full story Microsoft researchers achieve new conversational speech recognition milestone Last year, Microsoft’s speech and dialog research group announced a milestone in reaching human parity on the Switchboard conversational speech recognition task, meaning we had created technology that recognized words in a conversation as well as professional human transcribers. (via Microsoft) Read the full story "
15,035
2,022
"AI Weekly: Digging into DALL-E 2 for the enterprise | VentureBeat"
"https://venturebeat.com/2022/04/22/ai-weekly-digging-into-dall-e-2-for-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Digging into DALL-E 2 for the enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Welcome to the latest edition of AI Weekly As I finish my first week at VentureBeat, it’s a perfect opportunity to introduce myself: I’m Sharon Goldman, a senior editor and writer covering AI for technology decision-makers. Based in central New Jersey (exit 10), I’ve reported on business-to-business (B2B) technology for over a decade, writing for publications including CIO, Forbes.com, Insider, Shopper Marketing, Adweek and CMSWire. I chose a busy AI news cycle to get on board at VentureBeat. Certainly, the debut of DALL-E 2 , OpenAI’s new AI model, which uses advanced deep-learning techniques to generate and edit photorealistic images simply by comprehending text instructions, has been the subject of chatter for two weeks now. That includes both rhapsodic responses around DALL-E 2’s capability to create amazing photos of avocado-shaped teapots and chairs, as well as loud concerns about possible digital fakes image generation and the spread of misinformation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As Ben Dickson explained here , “DALL-E 2 is a ‘generative model,’ a special branch of machine learning that creates complex output instead of performing prediction or classification tasks on input data. You provide DALL-E 2 with a text description, and it generates an image that fits the description.” What sets DALL-E 2 apart from other generative models, he continued, is “its capability to maintain semantic consistency in the images it creates.” I wanted to know what this all means for enterprise business, so I reached out for comments from a couple of experts: Finally, in a VentureBeat column this week, Sahor Mor, a product manager at Stripe, explored how DALL-E 2’s powerful text-to-image model might be used to generate datasets to solve computer vision’s biggest challenges. “Computer vision AI applications can vary from detecting benign tumors in CT scans to enabling self-driving cars, yet what is common to all is the need for abundant data,” Mor wrote. “DALL-E 2 is yet another exciting research result from OpenAI that opens the door to new kinds of applications. Generating huge datasets to address one of computer vision’s biggest bottlenecks – data – is just one example.” Some experts, however, maintain there is the danger of over-hyping DALL-E 2. 
“It’s important not to conflate the ability to generate realistic images from text with ‘understanding,’” Peter Stone, president, founder and director of the Learning Agents Research Group (LARG) within the AI Laboratory in the department of computer science at the University of Texas at Austin, told VentureBeat. “I do not think of DALLE-2 as making significant advances (beyond existing models) towards the long-term goals of many people in the field of AI – it does not give me any more confidence than I had before that all of AI can be solved with neural networks alone.”

In Case You Missed It

AI’s future is packed with promise and potential pitfalls Why it’s a must-read: Solving the inherent problems of foundation models requires real-world use.

7 ways to improve data for supply chain digital twins Why it’s a must-read: Various approaches to supply chain twins show tremendous value in sorting out supply chain bottlenecks, improving efficiency and meeting sustainability goals.

The success of AI lies in the infrastructure Why it’s a must-read: AI is all about data, and data lives in infrastructure. The only way to ensure that AI’s promises can be turned into reality is to create the right physical underpinnings to allow the intelligence to work its magic.

Can human-centered MLops help AI live up to hype? Why it’s a must-read: Human-centered AI is more than a hyped buzzword or philosophical framework. While it focuses on how AI can amplify and enhance human performance, it is really about helping enterprises build and manage better AI.

Thanks for reading,

Sharon
Twitter: @sharongoldman "
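As a sketch of the synthetic-dataset idea Mor describes, the snippet below turns label and condition combinations into prompts and writes out a labeled image folder. The generation call is a deliberately empty placeholder, since DALL-E 2 had no public API at the time of writing; everything here (labels, prompts, file layout) is hypothetical and meant only to show the shape of the pipeline.

```python
import csv
import itertools
from pathlib import Path

def generate_image(prompt: str) -> bytes:
    """Placeholder for a text-to-image backend; plug in whatever generator you actually have."""
    raise NotImplementedError("no public text-to-image API is assumed here")

def build_synthetic_dataset(out_dir: str = "synthetic_defects", per_prompt: int = 3) -> None:
    """Expand label/condition combinations into prompts and a labeled image folder."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    labels = {"scratch": "a visible scratch", "dent": "a small dent", "ok": "no visible defects"}
    conditions = ["bright factory lighting", "dim warehouse lighting"]
    with open(out / "labels.csv", "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["file", "label"])
        for i, ((label, description), condition) in enumerate(itertools.product(labels.items(), conditions)):
            for j in range(per_prompt):
                prompt = f"a photo of a metal machine part with {description}, {condition}"
                image_bytes = generate_image(prompt)
                filename = f"img_{i:02d}_{j}.png"
                (out / filename).write_bytes(image_bytes)
                writer.writerow([filename, label])
```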
15,036
2,021
"Only 4% of supply chain leaders are 'future-ready,' Accenture says | VentureBeat"
"https://venturebeat.com/2021/06/27/only-4-percent-of-supply-chain-leaders-are-future-ready-accenture-says"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Only 4% of supply chain leaders are ‘future-ready,’ Accenture says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The pandemic accelerated digital transformation , but that doesn’t mean change comes easy. According to new research from consulting company Accenture , 81% of supply chain leaders say the pandemic has been their organization’s greatest stress test and that they’re facing technological change at “unprecedented speed and scale.” What’s more, only 4% say they’re “future-ready,” while 34% expect to be there by 2023. The research also underlines the need for enterprises to get up to speed — “future-ready” organizations were found to be twice as efficient and three times more profitable than peers, according to Accenture. “The pandemic exposed just how much the supply chain can make or break a company’s success,” the report states. “It has revealed hidden vulnerabilities. And in the process, the crisis has moved Chief Supply Chain Officers (CSCOs) to the forefront of change. The days when their sole focus was on cost management are gone.” Accenture uses the identifier “future-ready” to indicate its highest level of operational maturity, compared to “stable,” “efficient,” and “predictive.” The 4% of supply chain leaders that fall into this category, the firm says, have broken down siloes and enabled real-time visibility across the value chain. They’ve also transformed ways of working and reskilled their workforces to keep pace and adapt to change. For this research, Accenture worked with Oxford Economics to survey 1,100 executives globally, 44% of whom were C-level or equivalent. The participants spanned 13 industries and 11 countries, and Oxford Economics additionally conducted 12 in-depth, off-the-record interviews with executives. The challenges of evolution Overall, supply chain leaders cited lack of cohesive strategy and technology as their biggest challenge. Many supply chain functions are still constrained by aging legacy technology and are working in a patchwork of digital and non-digital systems. This prevents enterprises from reaping the rewards data-driven insights could deliver, including the ability to predict and monitor every action along the supply chain. Such insights can also be used to reinvent how companies source, plan, manufacture, distribute, and recycle products, but data and integration are key. 
“While the supply chain was once a linear flow of goods and services, today it exists as highly integrated networks of hundreds or thousands of suppliers,” Manish Sharma, group CEO of Accenture Operations, told VentureBeat. He says the early days of the pandemic only exacerbated this trend, as lockdowns triggered widespread supply chain disruptions. “The notion of real-time visibility [took] on new meaning and urgency for businesses.”

Despite these challenges, the research found supply chain leaders are fairly confident in their organization’s ability to widely use data, automation, AI, and other characteristics of future-readiness. Additionally, most CSCOs surveyed said their organization’s operations maturity has improved, and they’re optimistic about more progress in the next three years.

The process of transformation

To spark digital transformation, you must know the ultimate goal, know the steps, and know how to leapfrog maturity levels, the report summarizes. It argues supply chain leaders can “drive value at the ‘seams'” and start bridging silos by thinking holistically about strategy and technology.

Sharma gets even more specific, saying enterprises should look to scale automation to augment human talent and commit to making insight-driven decisions with better data and AI. During the pandemic, for example, he said Accenture helped Halliburton, which services the energy sector, with a new delivery platform that applies advanced analytics and enhanced business intelligence tools for its support teams. As part of the VentilatorChallengeUK Consortium, Accenture also helped Rolls-Royce coordinate production of ventilators urgently needed by the UK’s health service.

Sharma also recommends enterprises scale cloud investments to support digital supply chains and build ecosystem relationships. The pandemic actually helped accelerate the latter; 39% of the research participants said the pandemic pushed their organizations to focus more on partner relationships.

“Future-readiness starts with breaking down silos and executing an integrated and centralized supply chain strategy that brings all parts of the value chain together around a shared set of business and customer outcomes,” Sharma said. “Working with data-driven insights will help leaders predict and monitor every action along the supply chain and reinvent how they source, plan, manufacture, and distribute products.” "
15,037
2,021
"AMD boosts performance of datacenters, technical computing, HPC, and AI | VentureBeat"
"https://venturebeat.com/2021/11/08/amd-boosts-performance-of-datacenters-technical-computing-hpc-and-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AMD boosts performance of datacenters, technical computing, HPC, and AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Santa Clara, California-based company, Advanced Micro Devices (AMD), says its “Milan-X” AMD EPYC processors with 3D V-Cache, to launch in early 2022, will deliver a “50% average uplift” to technical computing workloads. It also said its Instinct MI200 GPUs, to also launch in early 2022, will boost high-performance computing (HPC) and AI. AMD made the announcements today at its Accelerated Data Center Premier virtual event. HPC is one area where AMD has bragging rights, given that its designs were chosen for Oak Ridge National Laboratory’s Frontier supercomputer , one of the first exascale systems capable of exceeding a quintillion, or 10 18 , calculations per second. Frontier pairs Cray’s new Shasta architecture and Slingshot interconnect with AMD EPYC and Instinct processors assembled with 4 GPUs to 1 CPU in each node, according to the project website. Currently, under construction, Frontier is scheduled to be available to scientists in early next year. “We’re bringing the CPUs, GPUs, and software together into a unified system architecture to power exascale computing,” Ram Peddibhotla, AMD corporate vice president, product management, said in a preview briefing for journalists. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While few businesses today aspire to exabyte performance, those with technical computing workloads like electronics design, structural analysis, computational fluid dynamics, and finite element analysis techniques used in engineering simulations will benefit from improvements to EPYC, according to AMD. For example, EPYC shows a 66% performance improvement for RTL verification, a critical process in electronic design automation. “Verification proves that each structure and the design does what it’s supposed to do,” Peddibhotla explained. “It helps catch defects early in the process before a chip is baked into silicon.” Designers taking advantage of this improvement will get the choice of finishing verification faster and getting to market faster or packing more tests into the same amount of time to improve quality, he said. Above: Frontier supercomputer AMD says EPYC benefits from continued improvements in its 3D chiplet manufacturing process and boosting the amount of L3 cache per complex core (CCD) from 32 to 96 megabytes. 
In an 8-CCD module that includes other types of cache, the total is “804 megabytes of cache per socket at the top of the stack — an incredible amount of cache,” Peddibhotla said. That means the processor can manage more information internally, without relying on other server memory or storage.

AMD says its latest GPU for datacenters will perform 9.5 times faster for HPC and 1.2 times faster for AI workloads than competing GPUs — like those from Nvidia. The Instinct MI200 is the latest in a line of GPUs specifically designed for datacenters, as opposed to gaming and desktop graphics. For this update, AMD particularly focused on improving performance for double-precision floating-point operations, which is why the claimed performance improvements are bigger for HPC than for AI processing. “We targeted this device to do really, really well on the toughest scientific problems requiring double-precision math, and that’s where we made the biggest step forward,” said Brad McCreadie, corporate VP of datacenter GPU accelerators at AMD. The performance improvement varies between types of HPC workloads; for example, McCreadie said the Instinct MI200 performs 2.5 times faster for the types of vector operations used for vaccine simulations.

More targeted toward AI developers is the release of the ROCm 5.0 open-source software for GPU computing, which integrates with popular frameworks such as PyTorch and TensorFlow, and the launch of the Infinity Hub collection of code and templated containers to help developers get started. AMD also announced the third generation of its Infinity architecture for interconnecting CPUs and GPUs, which it says can deliver up to 80Gbps of total aggregate bandwidth to reduce data movement and simplify memory management.

Despite fierce competition with Nvidia, Intel, and others, AMD reported 55% revenue growth in the most recent quarter. "
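As a sanity check on the cache numbers quoted above, the 804 MB per-socket figure is consistent with a 64-core, 8-CCD Milan-X part if one assumes the usual 512 KB of L2 per core and 32 KB + 32 KB of L1 per core; those per-core figures are our assumption, since the article only quotes the total.

```python
# Back-of-the-envelope check of the "804 megabytes of cache per socket" figure.
# Assumed configuration (not stated in the article): 8 CCDs, 64 cores,
# 512 KB L2 per core, 32 KB instruction + 32 KB data L1 per core.
ccds, cores = 8, 64
l3_per_ccd_mb = 96                     # Milan-X CCD with 3D V-Cache
l2_per_core_mb = 0.5
l1_per_core_mb = (32 + 32) / 1024

l3_total = ccds * l3_per_ccd_mb        # 768 MB
l2_total = cores * l2_per_core_mb      # 32 MB
l1_total = cores * l1_per_core_mb      # 4 MB
print(l3_total, l2_total, l1_total, l3_total + l2_total + l1_total)  # 768 32.0 4.0 804.0
```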
15,038
2,020
"Arm unveils 2 new AI edge computing chips | VentureBeat"
"https://venturebeat.com/2020/02/10/arm-unveils-2-new-ai-edge-computing-chips"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arm unveils 2 new AI edge computing chips Share on Facebook Share on X Share on LinkedIn Simon Segars expects Arm's partners to ship 50 billion chips in the next two years. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Semiconductor and software design company Arm is doubling down on edge AI hardware, a market that’s expected to be worth $1.15 billion by 2023. It today announced two new AI-capable processors — the Arm Cortex-M55 and Ethos-U55, a neural processing unit (NPU) — designed for internet of things (IoT) endpoint devices, alongside supporting software libraries, toolchains, and models. The company claims that the two chips, which are expected to arrive in market in early 2021, together will deliver an uplift of up to 480 times in machine learning performance in certain voice and vision scenarios. “[Machine learning] processing on low-power endpoint devices is critical to realizing the full potential of AI for IoT,” wrote the company in press materials. “An extended range of advanced hardware capabilities is required to enable innovation and scale.” The Cortex-M55, the newest member of Arm’s Cortex-M processor portfolio of cost-optimized and power-efficient microcontroller devices, delivers up to a 15 times uplift in AI performance on its own compared with previous Cortex-M generations, as well as custom instructions and configuration options. Like the Ethos-U55, it’s available in a reference design — Corstone-300 — that ships with a number of secure subsystems and a toolkit with which to build secure embedded systems. The Cortex-M55 also has the distinction of being the first system-on-chip based on Arm’s Helium technology, an extension of the Armv8.1-M architecture that is optimized for low-power silicon and adds over 150 new scalar and vector instructions. The integer Helium enables efficient compute of 8-bit, 16-bit, and 32-bit fixed-point data, the last two of which are widely used in traditional signal processing applications such as audio processing. As for the 8-bit fixed-point format, it’s common in machine learning processing such as neural network computation and image processing, complementing floating point data types including single-precision floats (32-bit) and half-precision floats (16-bit). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
According to Arm, Helium enables a performance boost of up to 5 times for digital signal processing and 15 times for machine learning with the Cortex-M55. It additionally allows for advanced memory interfaces to provide speedy access to machine learning data, and it builds in Arm's TrustZone systemwide embedded security technology. Ethos-U55 — which is intended to be paired with a Cortex-M processor like the Cortex-M55, Cortex-M33, Cortex-M7, or Cortex-M4 — packs between 32 and 256 configurable compute units capable of achieving up to a 32 times machine learning performance uplift compared with the Cortex-M55. Pitted against the Cortex-M7, Arm claims that the Ethos-U55 and Cortex-M55 are up to 50 times faster in terms of speed to inference and up to 25 times more energy efficient in tasks like voice activity detection, noise cancellation, two-mic beamforming, echo cancellation, equalizing, mixing, keyword spotting, and automatic speech recognition. On the software side, both the Ethos-U55 and Cortex-M55 benefit from a unified software development flow that folds embedded code, digital signal processor code, and AI model code into one. Importantly, it plays nicely with popular machine learning frameworks like Google's TensorFlow and Facebook's PyTorch, plus Arm's own solutions. The unveiling of the Ethos-U55 comes after Arm took the wraps off the Ethos-N57 and Ethos-N37, which both feature voice recognition and always-on capabilities. (Ethos, which launched recently, is a product suite focused on solving complex AI compute challenges with sensitivity to battery life and cost.) Around the same time in October, the company revealed its roadmap for Mbed OS, the embedded software operating system that's meant to serve as a foundation for smart and connected devices."
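Deploying a model to an 8-bit NPU of this kind generally starts with a fully integer-quantized network. The sketch below shows TensorFlow Lite's standard full-integer post-training quantization flow; the saved-model path and calibration shapes are placeholders, and the additional step of compiling the resulting .tflite file with Arm's offline tooling (for example, the Vela compiler for Ethos-U) is not shown.

```python
# Full-integer (int8) post-training quantization with TensorFlow Lite.
# "saved_model_dir" and the calibration generator are placeholders.
import numpy as np
import tensorflow as tf


def representative_dataset():
    # Yield a few calibration samples shaped like the model's input.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op onto int8 kernels so the model suits int8-only accelerators.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```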
15,039
2,021
"Arm expands offerings in IoT, virtual hardware, and 5G | VentureBeat"
"https://venturebeat.com/2021/10/18/arm-expands-offerings-in-iot-virtual-hardware-and-5g"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arm expands offerings in IoT, virtual hardware, and 5G Share on Facebook Share on X Share on LinkedIn Arm is offering total solutions for IoT. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Arm is releasing new chip design offerings in the internet of things (IoT), virtual hardware, and 5G sectors. Cambridge, United Kingdom-based Arm designs the architecture that other licensed chip makers use to build their chips. Arm likes to make it easier for those licensees to come up with their applications and create a foundation for an IoT economy. So the company said its Arm Total Solutions for IoT now delivers a full-stack solution to significantly accelerate the development and return-on-investment for IoT chip products. And Arm Virtual Hardware removes the need to develop on physical silicon, enabling software and hardware co-design and accelerating product design by up to two years, the company claimed. Arm said a new ecosystem initiative called Project Centauri will drive the standards and frameworks needed to grow serviceable markets and scale IoT software innovation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Through a radical change in how systems are designed, Arm is uniquely positioned to fuel a new IoT economy that rivals the shape, speed, and size of the smartphone industry’s app economy,” said Mohamed Awad, vice president of IoT and embedded at Arm. “Arm Total Solutions for IoT changes the way we’re delivering key technology to the entire ecosystem and demonstrates our significant and ongoing investment in the software that will empower developers to innovate for global impact.” Virtual hardware designing Arm Virtual Hardware brings modern agile software development methodologies like continuous integration/continuous deployment (CI/CD), DevOps, and MLOps to IoT and embedded platforms without having to invest in complex hardware farms. The new cloud-based offering delivers a virtual model of Arm’s Corstone subsystem to enable software development without the need for physical silicon. With accurate models of Arm-based chip designs providing mechanisms for simulating memory, peripherals, and more, development and testing of software can now occur before silicon availability. This ultimately reduces a typical product design cycle from an average of five years to as little as three years. 
Arm says Corstone, its validated and integrated subsystem, has accelerated the time to market for more than 150 designs from the company’s silicon partners. Chip market growth To date, Arm partners have shipped more than 70 billion devices based on the Arm Cortex-M series. That shows no sign of slowing, as chips for IoT are expected to have an average compound annual growth rate (CAGR) of nearly 15% through 2026, according to analyst firm Mordor Intelligence. Meanwhile, Arm also said the majority of 5G infrastructure deployments will be powered by Arm-based chips. Commercial 5G wireless networks are now live in more than 60 markets around the world. On top of that, the number of 5G connections is expected to reach 692 million globally by the end of this year. With support from key players in the industry, Arm unveiled its new Arm 5G Solutions Lab in partnership with Tech Mahindra , a big provider of engineering services. The Arm 5G Solutions Lab will focus on accelerating innovation for network infrastructure by providing a place for Arm’s hardware and software ecosystem partners to come together and demonstrate end-to-end 5G solutions in a live test environment. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
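For readers who want to sanity-check growth figures like the roughly 15% CAGR cited above, the compound-growth arithmetic is simple; the sketch below applies the standard formula, and the five-year horizon is an assumption chosen for illustration.

```python
# Compound annual growth rate (CAGR) arithmetic for figures like those cited above.
def grow(value, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return value * (1.0 + cagr) ** years


# Example: a market growing at ~15% CAGR roughly doubles in five years.
multiplier = grow(1.0, 0.15, 5)
print(f"5 years at 15% CAGR -> {multiplier:.2f}x the starting size")
```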
15,040
2,022
"DDoS attack was 'largest' ever in Ukraine, Russia suspected | VentureBeat"
"https://venturebeat.com/2022/02/16/ddos-attack-was-largest-ever-in-ukraine-russia-suspected"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DDoS attack was ‘largest’ ever in Ukraine, Russia suspected Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The distributed denial-of-service (DDoS) attack Tuesday against military and financial institutions in Ukraine was the “largest DDoS attack in the country’s history,” a Ukrainian government agency said. Ukraine “successfully stopped” the attack, the State Service of Special Communication and Information Protection of Ukraine said in a statement posted online. The DDoS attack affected targets including the websites of the Ministry of Defense and the Armed Forces of Ukraine, as well as the web services of Privatbank and Oschadbank. DDoS attacks typically attempt to bring down websites or networks by overwhelming the web server with traffic. The “main purpose is to sow panic among Ukrainians and destabilize the situation in the country,” the Ukrainian agency said in its statement. “In fact, it was a large-scale stress test that Ukraine withstood.” The DDoS attack came as Russia has amassed an estimated 150,000 troops near Ukraine, U.S. President Joe Biden said Tuesday. Russia has been known to use cyberattacks as part of military campaigns in the past, including in Georgia and the Crimean Peninsula in Ukraine. Most recently, Ukraine blamed Russia for attacks in January that left dozens of the government’s websites inaccessible or defaced. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cybersecurity experts say that if Russia does plan to invade Ukraine, it would undoubtedly use cyberattacks as a key part of its strategy — just as the country has done in previous military campaigns over the past decade-and-a-half. The secretary-general of NATO, Jens Stoltenberg, said there’s no evidence that Russia is pulling back on its forces near Ukraine, despite claims by the Russian military that it was starting to withdraw. “We do not see any sign of de-escalation on the ground,” Stoltenberg said, according to the BBC. ‘Trace of foreign intelligence’ Ilya Vityuk, head of the cybersecurity department for the Security Service of Ukraine (SSU), discussed the incident during a news conference Wednesday, which was reported on by news outlets including the New York Times. 
The Ukrainian agency’s statement posted online also included Vityuk’s comments, which said that with the DDoS attacks Tuesday, “There is a trace of foreign intelligence services.” “Based on current realities, the country that is interested in such image damage to Ukraine is Russia,” Vityuk said, according to the version of the statement posted online. “However, this should be established within the relevant investigation.” A separate statement from the SSU, however, raised greater suspicion about possible Russian involvement. “According to preliminary information, Russian special services may be involved,” the statement said , according to a translation. A spokesperson for the Kremlin denied Russian involvement in the DDoS attack, according to the New York Times report. “We know nothing about it, but we are not surprised that Ukraine is continuing to blame Russia for everything,” the spokesperson, Dmitri S. Peskov, reportedly said. “Russia has nothing to do with any DDoS attacks.” The attack targeting Ukrainian servers on Tuesday was indeed a powerful DDoS attack, according to findings from cyber firm CrowdStrike. “Telemetry acquired during the attacks indicates a large volume of traffic three orders of magnitude more than regularly observed traffic,” said Adam Meyers, senior vice president of intelligence at CrowdStrike, in an email statement. In the attack, 99% of the traffic consisted of HTTPs requests, “indicating the attackers were attempting to overwhelm Ukrainian servers,” Meyers said. Impact on western countries Though there is “no evidence of any targeting of western entities at this time, there is certainly potential for collateral impact as a result of disruptive or destructive attacks targeting Ukraine,” he said. “This could impact companies that have a presence in Ukraine, those that do business with Ukrainian companies, or have a supply chain component in Ukraine such as code development/offshoring.” On Tuesday, Biden touched on the possibility of Russian cyberattacks impacting the U.S. “If Russia attacks the United States or allies through asymmetric means, like disruptive cyberattacks against our companies or critical infrastructure, we are prepared to respond,” Biden said. Diversion tactic? The DDoS attacks might also be a “diversion from something else, like a stealthier cyberattack,” said Justin Fier, director of cyber intelligence and analytics at cyber firm Darktrace, in an email to VentureBeat on Tuesday. At Darktrace, “across our customer base, we sometimes see noisy attack techniques like this used to distract security teams while bad actors remain inside digital systems to carry out more deadly attacks behind the scenes,” Fier said. That can include stealing or altering sensitive data, shutting down critical systems, or “simply lying dormant until the right time comes,” he said. Ukraine’s cyber response The Ukrainian agency statement provided additional details on how the DDoS attack was defended against: After the Government Computer Emergency Response Team CERT-UA received a report of disruptions in a number of government websites, some information resources were suspended to prevent the attack from spreading. Modern, powerful systems for counteracting DDoS attacks were also used. This prevented attacks on other sites, including the websites of the Security Service of Ukraine, the Foreign Intelligence Service, etc. “We have significantly reduced the level of malicious traffic by restricting access control lists and configuring policies on anti-DDoS attacks. 
Our cleaning centers are working. Therefore, despite the fact that the attack is still ongoing, and its average power reaches tens of gigabits per second, the situation is completely under control: web resources continue to function” said Victor Zhora. Deputy Secretary of the National Security and Defense Council of Ukraine Serhii Demedyuk praised the response of the state cybersecurity system to the latest cyberattack. At the same time, national actors of cybersecurity not only in Ukraine but also in partner countries, including the USA and European countries, worked 24/7 to minimize the consequences of the cyberattack, which he described as “informational and psychological.” The Security Service of Ukraine “is investigating criminal proceedings on the fact of DDoS-attacks,” which “does not exclude the involvement of special services of the aggressor country,” the statement says. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
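CrowdStrike's observation that attack traffic ran three orders of magnitude above normal levels points at the simplest detection signal there is: compare the current request rate to a rolling baseline and flag large multiples. The sketch below is a deliberately naive illustration of that idea, not a description of any tooling used by Ukraine's defenders; the window size and threshold are arbitrary.

```python
# Naive request-rate anomaly flagging: compare each interval's request count
# to a rolling baseline and flag large multiples (illustrative only).
from collections import deque


def detect_spikes(counts_per_interval, window=12, multiplier=10.0):
    """Yield (index, count, baseline) for intervals far above the rolling mean."""
    history = deque(maxlen=window)
    for i, count in enumerate(counts_per_interval):
        if len(history) == window:
            baseline = sum(history) / window
            if baseline > 0 and count > multiplier * baseline:
                yield i, count, baseline
        history.append(count)


# Simulated per-minute request counts: steady traffic, then a flood.
traffic = [1_000] * 30 + [1_200_000] * 5 + [1_000] * 10
for index, count, baseline in detect_spikes(traffic):
    print(f"minute {index}: {count} requests vs. baseline ~{baseline:.0f} -> possible DDoS")
```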
15,041
2,022
"Ukraine: We've repelled 'nonstop' DDoS attacks from Russia | VentureBeat"
"https://venturebeat.com/2022/03/07/ukraine-weve-repelled-nonstop-ddos-attacks-from-russia"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ukraine: We’ve repelled ‘nonstop’ DDoS attacks from Russia Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A Ukraine agency said S aturday that government websites have been hit with continuous distributed denial-of-service (DDoS) attacks, which the agency attributed to “Russian hackers,” since Russia’s invasion on February 24. However, “despite all the involved enemy’s resources, the sites of the central governmental bodies are available,” the State Service of Special Communication and Information Protection (SSSCIP) of Ukraine said in a tweet. Since the invasion, Ukraine’s government has been focusing much of its public communications around the Russia-provoked military conflict on the ground. The tweets, however, were an acknowledgment that Ukraine has continued to face attacks in the cyber realm, as well. It also appeared to be the first time that cyberattacks have been attributed to threat actors in Russia since the invasion began. DDoS attacks against military and financial institutions in Ukraine that took place prior to the invasion, on February 15-16, were attributed to the Russian government by officials in the U.S. and U.K. DDoS typically attempt to force websites or networks offline by overwhelming servers with traffic. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ‘Nonstop’ attacks In its tweets on Saturday, the SSSCIP said that “Russian hackers keep on attacking Ukrainian information resources nonstop,” and have been doing so “since the beginning of [the] invasion.” The agency specified that the attacks have been DDoS attacks “primarily” aimed at the websites of the Ukrainian parliament (Verkhovna Rada), president Volodymyr Zelenskyy, the cabinet of ministers, the defense ministry and the internal affairs ministry of Ukraine. The “most powerful” DDoS attacks against Ukrainian government sites peaked at more than 100 Gbps, the SSSCIP said. While far above the average DDoS attack size, research from Radware shows that the largest DDoS attack recorded during the first three quarters of 2021 was 348Gbps — or 3.5 times the size of the most powerful DDoS attacks against Ukraine. The DDoS attacks against Ukraine are “definitely not setting any records,” said Chris Partridge , a security professional who has been tracking cyberattacks during the Russia-Ukraine conflict. 
“But I think it’s a good sign that Ukraine has been able to shrug some of these attacks off from Russia,” Partridge said in a message to VentureBeat. In the recent attacks, “the only thing the occupants managed to do was to substitute the front pages at the sites of some local authorities,” the SSSCIP said in a tweet, before adding : “We will endure! On the battlefields and in the cyberspace!” Meanwhile, hackers in Ukraine’s IT army and hacktivist groups such as Anonymous have continued hitting back with DDoS attacks against Russian targets. At last check, numerous government, financial and media websites targeted by the Ukraine IT army were seeing 0% or 10% uptime within Russia, according to data posted by Partridge on GitHub. Anonymous attack On Sunday, Anonymous claimed on Twitter to have replaced the live feeds for several Russian TV channels and streaming services with video footage from the war in Ukraine, along with a message opposing the war. Jeremiah Fowler, cofounder and senior security researcher at Security Discovery, told VentureBeat that his cybersecurity research firm did capture video of a Russian state TV channel feed that was hacked to display pro-Ukrainian information. “I would mark this claim [from Anonymous] as true, given that they most likely got to other channels too,” Fowler said in an email. As part of recent research into the efforts by hacker groups such as Anonymous to launch cyberattacks against Russia, Fowler said he was able to find the database of an internet and cable provider in Russia that contained ports and pathways, and source locations of where shows are streaming from. “It is highly possible that someone could hijack the feed and trick or spoof the channel to believe it is pulling programming from the legitimate source and instead show other video footage to viewers,” Fowler said. The cyber effort to aid Ukraine is also getting assistance from U.S. Cyber Command, The New York Times reported Sunday. “Cybermission teams” from the agency are currently working from Eastern European bases “to interfere with Russia’s digital attacks and communications,” according to the Times. Given that U.S. Cyber Command is a part of the Department of Defense, that raises that question of whether this makes the U.S. a “co-combatant,” the report noted. From The New York Times report : By the American interpretation of the laws of cyberconflict, the United States can temporarily interrupt Russian capability without conducting an act of war; permanent disablement is more problematic. But as experts acknowledge, when a Russian system goes down, the Russian units don’t know whether it is temporary or permanent, or even whether the United States is responsible … Government officials are understandably tight-lipped [about what Cyber Command is doing], saying the cyberoperations underway, which have been moved in recent days from an operations center in Kyiv to one outside the country, are some of the most classified elements of the conflict. But it is clear that the cybermission teams have tracked some familiar targets, including the activities of the G.R.U., Russia’s military intelligence operations, to try to neutralize their activity. Guidance for U.S. In the U.S., the federal Cybersecurity and Infrastructure Security Agency (CISA) has also been providing guidance around vulnerabilities that may be tied to threats coming out of Russia, potentially in retaliation for western sanctions over Ukraine. 
Last Thursday, CISA added 95 vulnerabilities to its Known Exploited Vulnerabilities Catalog. It’s unusual for the agency to add “more than a handful” of vulnerabilities to their catalog at one time, said Mike Parkin, senior technical engineer at Vulcan Cyber. Coming amid the situation in Ukraine, “these additions are likely an effort to prevent cyberwarfare activities spilling into U.S. organizations covered by CISA directives,” Parkin said. The 95 vulnerabilities added to the CISA catalog on Thursday all have a short deadline for remediation by federal agencies – within March, Viakoo CEO Bud Broomhead noted. And most are in widely used systems, including 38 for Cisco products, 27 for Microsoft products and 16 for Adobe products, Broomhead said. Thus far, there is “no direct evidence that state, state-sponsored, or other threat actors friendly to Russia have attacked U.S. resources, there is no reason to assume they will not do so,” Parkin told VentureBeat. “[But] given that there are already extensive cyberwarfare activities between Russia and Ukraine and their supporters on both sides, it’s highly likely allies on both sides will become targets of the cyber conflict.” Many of Russia’s allies also consider the U.S. an adversary on some level, and have their own well-equipped and well-financed cyberwarfare capabilities, he said. “With all of that, it is likely that CISA included threats that were not previously considered high-risk as threat actors look for additional attack vectors,” Parkin said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
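CISA publishes the Known Exploited Vulnerabilities Catalog as a machine-readable JSON feed, which makes this kind of triage easy to script. The sketch below downloads the feed and groups recently added entries by vendor; the feed URL and field names reflect the public catalog at the time of writing and should be treated as assumptions to verify.

```python
# Pull CISA's Known Exploited Vulnerabilities (KEV) feed and group recent
# additions by vendor. URL and field names are assumptions to verify.
from collections import Counter
from datetime import date, timedelta

import requests

KEV_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)


def recent_kev_by_vendor(days=30):
    catalog = requests.get(KEV_URL, timeout=30).json()
    cutoff = date.today() - timedelta(days=days)
    counts = Counter()
    for vuln in catalog.get("vulnerabilities", []):
        added = date.fromisoformat(vuln["dateAdded"])
        if added >= cutoff:
            counts[vuln["vendorProject"]] += 1
    return counts


if __name__ == "__main__":
    for vendor, count in recent_kev_by_vendor().most_common(10):
        print(f"{vendor}: {count} entries added in the last 30 days")
```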
15,042
2,022
"Is Russia exploring cyberattacks against U.S. in response to hacktivists? | VentureBeat"
"https://venturebeat.com/2022/03/21/is-russia-exploring-cyberattacks-against-u-s-in-response-to-hacktivists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Is Russia exploring cyberattacks against U.S. in response to hacktivists? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. President Joe Biden warned today that a wave of new Russian cyberattacks against targets in the U.S. could be getting closer. But nearly a month into Russia’s assault on Ukraine , this raises the question: Why now? And are such attacks potentially coming as a response to sanctions against Russia over Ukraine — or something more? Cybersecurity industry veteran Mike Hamilton thinks it’s likely to be the latter. And that “something more,” he says, could be the hacking efforts by volunteers such as the Anonymous hacktivist group. “Part of this may be driven by the pretext that has been provided by an army of volunteers,” said Hamilton, founder and CISO at security firm Critical Insight, and formerly the vice-chair for the DHS State, Local, Tribal and Territorial Government Coordinating Council. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “After Anonymous has gone after pipelines, the Russian space agency, electric vehicle charging stations, broadcast television and unsecured printers, it is credible to claim that this is an aggressive action by the United States and retaliation may be under consideration,” Hamilton said in comments provided via email. Today, Biden released a statement saying his administration is in possession of “evolving intelligence that the Russian Government is exploring options for potential cyberattacks.” This has prompted Biden to reiterate previous warnings that “Russia could conduct malicious cyber activity against the United States, including as a response to the unprecedented economic costs we’ve imposed on Russia alongside our allies and partners.” “It’s part of Russia’s playbook,” Biden said in the statement. The U.S. federal government has been warning for weeks that Russia may retaliate against the U.S. for the country’s support of Ukraine, and actions taken to impose a financial cost on Russia for its unprovoked assault on its neighboring country. Blaming the U.S. Cyber experts, however, have also been suggesting for weeks that there’s a risk of Russia incorrectly attributing, or otherwise holding the U.S. responsible for, the cyberattacks that hacktivists and other Ukraine-supporting hackers have been carrying out against Russia. 
“It’s difficult, if not impossible to quickly determine where an attack came from, or who was behind the attack,” said John Dickson, vice president at Coalfire, in a previous email to VentureBeat. “Things can get messy quickly. And the risk of ‘hack back’ cyberattacks from the Russians directed toward the U.S. and west becomes more likely.” With Biden’s statement today, that likelihood seems to now be higher. “The language in the announcement by the White House is beginning to edge up on ‘specific and credible’ threats,” Hamilton said — though he said it’s notable that the statement does cite “evolving intelligence.” “My Administration will continue to use every tool to deter, disrupt, and if necessary, respond to cyberattacks against critical infrastructure. But the Federal Government can’t defend against this threat alone,” Biden said in the statement. “Most of America’s critical infrastructure is owned and operated by the private sector and critical infrastructure owners and operators must accelerate efforts to lock their digital doors.” For more than a month now, the Cybersecurity and Infrastructure Security Agency (CISA) has been calling on businesses and government agencies to put “ shields up ” in the U.S. Today, CISA director Jen Easterly said that Biden’s statement “reinforces the urgent need for all organizations, large and small, to act now to protect themselves against malicious cyber activity.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,043
2,021
"Why an emerging cloud security trend offers 'good news' to businesses | VentureBeat"
"https://venturebeat.com/2021/11/23/why-an-emerging-cloud-security-trend-offers-good-news-to-businesses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why an emerging cloud security trend offers ‘good news’ to businesses Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While the cloud security market has developed rapidly in recent years, there’s now a wide array of tools to juggle for securing cloud infrastructure and applications. There are “too many tools,” in fact, said Neil MacDonald, a vice president and analyst at Gartner, speaking at the research firm’s Security & Risk Management Summit — Americas virtual conference last week. Now, however, there’s major consolidation underway in the cloud security tools market, a trend that is “good news” for enterprises, MacDonald said. In response to cloud security challenges and the growing popularity of the cloud — Gartner estimates 70% of workloads will be running in public cloud within three years, up from 40% today — the demand for cloud security has surged. Research firm MarketsandMarkets forecasts that cloud security spending will reach $68.5 billion by 2025 , up from $34.5 billion last year. But the cloud security tools, and acronyms, are numerous. There’s CSPM (cloud security posture management) for spotting misconfigurations in cloud infrastructure. There’s CIEM (cloud infrastructure entitlements management) for managing cloud identities and permissions. There’s CWPP (cloud workload protection platforms) for securing virtual machines, containers, and serverless functions. And there are additional tools to proactively identify vulnerabilities during app development, such as tools for scanning containers and Infrastructure as Code (IaC). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But now, instead of needing to acquire these different tools and find a way to use them all together, the idea is to have one platform to rule them all: CNAPP. That stands for cloud-native application protection platform, and it’s an offering that includes all of the tools mentioned above. Or at least, that’s starting to be the case — with many vendors in the process of assembling the different pieces into a CNAPP whole (more on that below). Vendors in the emerging CNAPP space include some of the best-funded startups in cybersecurity along with some of the most well-established companies in the security industry. 
Gartner coined the term CNAPP earlier this year — partly in recognition of what was already happening in the market, and partly to encourage further consolidation of cloud security tools under the CNAPP umbrella. “These walls are coming down,” MacDonald said. “We need to think of cloud-native application protection as a lifecycle problem from development into operations. And there are vendors now that can do most of everything [that’s part of CNAPP].” Cloud security challenges While enterprises have accelerated their shift to the cloud during the pandemic, cloud security remains a foremost challenge. A recent survey of cloud engineering professionals found that 36% of organizations suffered a serious cloud security data leak or a breach in the past 12 months. Likewise, a recent Gartner survey found that more than a third of companies see lack of security readiness as an obstacle to public cloud migration — ranking as the most common challenge to cloud cited in the survey. Thus, for customers, the cloud security trend of unifying disparate tools so there are fewer to deal with is worth considering, MacDonald said. “I think you should have fewer vendors, not more security vendors — do not mistake more security vendors for ‘defense in depth,'” he said, referring to the cybersecurity strategy of deploying multiple layers of defense. “But it also means you should be open to switching vendors, consolidating vendors, switching to one that understands your needs.” Many cyber vendors have already embraced the CNAPP concept, saying that ultimately, the customers win with a unified offering in the cloud security realm. Some — such as Palo Alto Networks, Aqua Security, and Orca Security — were already offering the key components of CNAPP prior to Gartner coining the term. For instance, Aqua Security describes its offering, the Aqua Platform, as a “complete” cloud-native application protection platform. And the vendor has seen “high double-digit” revenue and customer growth for its CNAPP so far this year, said Rani Osnat, senior vice president of strategy at the 450-person company. “Customers are looking for a broader platform,” Osnat said. “Even customers that are relatively in the beginning of their journey understand that from a vision standpoint, they don’t want to slice this up into too many little pieces.” Simplifying cloud security Freelance services marketplace Fiverr adopted Orca Security’s platform in part to help simplify the process of ensuring cloud security, said Shahar Maor, chief information security officer at Fiverr, in a statement to VentureBeat. “There are a lot of complexities in securing public cloud environments,” Maor said. “The value of a CNAPP like Orca Security is that I’ve got a single, comprehensive solution to identify risk, as well as provide actionable insights and value across IT, DevOps, and engineering.” Along with Orca Security, Palo Alto Networks, and Aqua Security, other vendors offering the capabilities that fall under CNAPP include Lacework, McAfee Enterprise, Qualys, Sonrai Security, and Wiz. Aqua Security Aqua Security has offered capabilities for scanning applications during development, including IaC security scanning, since the launch of the company in 2015. In terms of workload protection, Aqua focused on containers at the beginning and added serverless and VMs in 2017 to give it full CWPP capabilities. The company added CSPM through the acquisition of CloudSploit in 2019. 
Recent enhancements to Aqua’s CNAPP offering have included cloud-native detection and response, which provides monitoring and detection to identify zero-day attacks in cloud-native environments. “One of the things that make CNAPP such a ‘gospel’ in this market is that unlike traditional security solutions in the past, it covers a very broad set of personas,” Osnat said. “It spans developers and DevOps to cloud admins and security personnel. And that is quite unique in the market. So while nobody expects developers to become security experts, by helping developers embed security into their CI/CD processes, you help solve the problem.” In March, Aqua Security raised $135 million in series E funding at a $1 billion valuation. Lacework Lacework, which was founded in 2014, started out in CWPP and later added CSPM. “We began by addressing CWPP use cases with automation, without requiring the use of any rules/policies,” said Adam Leftik, vice president of product at Lacework, in an email to VentureBeat. “We later added in CSPM and vulnerability management capabilities with all of the insights necessary to efficiently handle compliance, audit, and risk management needs.” Other additions have included IaC remediation capabilities through the acquisition of Soluble earlier this month, along with other features including an inline vulnerability scanner to help developers find and fix vulnerabilities in their CI/CD pipelines. “CNAPP represents a mindset shift toward a security approach that includes everyone involved in the business,” Leftik said. “Enterprises have an opportunity to completely rethink their security approach as one overarching continuum throughout development and operations rather than one-off problems that have to be fixed with manual, rules-based processes. As more customers embrace cloud and build in containers, there will be more demand for platforms that can protect cloud-native applications across development and production.” Lacework raised $1.3 billion in funding earlier this month — one of the largest venture rounds in the U.S. this year — at an $8.3 billion post-money valuation. That followed the company’s $525 million fundraise in January. McAfee Enterprise McAfee Enterprise began offering CWPP in early 2017 and added CSPM functionality to the offering in early 2019. The McAfee Enterprise MVision CNAPP also includes container security capabilities via the acquisition of NanoSec in 2019, and data loss prevention capabilities via the acquisition of Skyhigh Networks in 2018. In March, MVision CNAPP added in-tenant DLP scanning facilitating for increased data security, privacy, and cost optimization. “As organizations continue to benefit from moving more workloads to the cloud, cloud threats are also on the rise,” said Dan Frey, product marketing engineer at McAfee Enterprise and FireEye, in an email to VentureBeat. “McAfee Enterprise expects adoption of MVision CNAPP to continue in step with customer requirements and cloud adoption rates.” In October, McAfee Enterprise was combined with cybersecurity firm FireEye in a deal orchestrated by their owner, private equity firm Symphony Technology Group. Symphony had acquired McAfee’s enterprise security business in March for $4 billion. Orca Security Orca Security has had CSPM, CWPP, and CIEM since its founding in 2019. “We were a CNAPP before the term existed, and we are excited to see the official emergence and recognition for the category,” said Avi Shua, cofounder and CEO of Orca Security, in an email to VentureBeat. 
The company recently enhanced its identity and access management risk detection capabilities to cover misconfigurations, events and anomalies, and access traversal. Additionally, a new CI/CD offering includes detection of security issues in the developer pipeline and during deployment before reaching production. “Security teams are overwhelmed with thousands of meaningless, disconnected alerts,” Shua said. “With a CNAPP, customers can focus on the alerts that matter, get more functionality with fewer cloud security tools — and can finally address the cost and complexity of managing disparate tools.” In October, Orca Security extended its series C round to $550 million at a $1.8 billion post-money valuation. Palo Alto Networks Palo Alto Networks introduced its Cloud Native Security Platform, Prisma Cloud, in November 2019, combining CSPM capabilities from its RedLock and Evident.io acquisitions with CWPP capabilities from its Twistlock and PureSec acquisitions. The company added capabilities including CIEM with Prisma 2.0 in 2020. Then last week, Palo Alto Networks debuted Prisma Cloud 3.0 — which it described as a CNAPP — with enhancements including the integration of CIEM for Azure and IaC security. “Customers today have been using a large number of point solutions to address cloud security requirements ad hoc,” said Ankur Shah, senior vice president and general manager of Prisma Cloud at Palo Alto Networks, in a statement to VentureBeat. “As customers build their overall strategy, they want to use a CNAPP that provides comprehensive security across multi-cloud and hybrid-cloud environments.” The publicly traded company currently has a market capitalization of $51.98 billion. Qualys Qualys has been offering CWPP for virtual machines running in the public cloud for the past five years. The company extended the solution to support container workloads and introduced CSPM in 2018. Recent additions to the Qualys CNAPP offering have included detecting misconfigurations in IaC, compliance for containers, and risk-based vulnerability management. “With an increasing number of organizations charting the course for their cloud journeys — and no sign of stopping or slowing — securing this journey has become one of the top concerns of customers. With this new focus, there is an increasing opportunity for vendors to address this concern with solutions such as CNAPP,” said Parag Bajaria, vice president of cloud and container security at Qualys, in an email. “Cloud security is fragmented into multiple categories and various point products that address those categories. Due to this complexity, there is often a large amount of customer confusion. As a result of this confusion, Qualys is increasingly seeing customers ask for a single consolidated solution.” The publicly traded company currently has a market capitalization of $5.34 billion. Sonrai Security Sonrai Security, which was founded in 2018, started out in CIEM and later added CSPM. The Sonrai Dig offering also includes data security, and the startup “will soon announce new capabilities to our CIEM, CSPM, and data security platform,” said Brendan Hannigan, CEO and cofounder of Sonrai Security, in an email to VentureBeat. “Cloud security offerings like Sonrai Dig hold the entire future for cloud security specifically and security in general,” Hannigan said.
“Old-world data center solutions increasingly will become irrelevant as digital disruption expands the cloud while data centers and enterprise networks decline.” Sonrai Security announced a $50 million series C funding round in October. Wiz Wiz has provided CSPM and CWPP functionality since its founding in 2020. The startup has mainly focused on expanding its CWPP capabilities, recently introducing the ability to scan workloads for malware without needing to install any agents. “CNAPP will become the de facto cloud security product,” said Yinon Costica, cofounder and vice president of product at Wiz, in an email to VentureBeat. “It will extend all the way from cloud environments to the code developers are writing. The big opportunity here is to drastically simplify cloud security in a way that lets business move faster than ever before — but securely this time. The fragmented approach we had before could never do that.” In October, Wiz raised a $250 million series C funding round at a post-money valuation of $6 billion. That followed the company’s $130 million series B round in March. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,044
2,021
"What Log4Shell teaches us about open source security | VentureBeat"
"https://venturebeat.com/2021/12/18/what-log4shell-teaches-us-about-open-source-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest What Log4Shell teaches us about open source security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A serious security vulnerability is discovered in a piece of open-source software — widely used behind the scenes on the internet but little known to the average person — that can give attackers access to a treasure trove of sensitive data. The incident exposes how a vulnerability in a seemingly simple bit of infrastructure code can threaten the security of banks, tech companies, governments, and pretty much any other kind of organization. Companies race to fix the problem but fear it will plague the internet for years. Sounds like Log4Shell , the previously unknown flaw in a ubiquitous and free program that has been freaking out experts since it came to light last week, right? Yes, but it also describes an eerily similar episode from 2014. Remember Heartbleed ? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Heartbleed was a bug in OpenSSL , the most popular open-source code library for executing the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols used in encrypting websites and software. The flaw, which allowed hackers to trick a vulnerable web server into sending them encryption keys and other confidential information, was linked to several attacks, including one on a large U.S. hospital operator that resulted in the theft of 4.5 million healthcare records. Researchers at Google and software company Codemonicon independently discovered the vulnerability and reported it in April 2014. After Heartbleed came to light, the world wondered how malicious actors were able to compromise a piece of software so essential to the internet’s secure operation. To many, the incident also raised questions about the security of all open-source software. Fast forward to December 2021 and those same questions are surfacing. Like OpenSSL, Log4j — the Java program compromised by the Log4Shell bug — is a widely used, multi-platform open-source library. Developed and maintained under the auspices of the all-volunteer Apache Software Foundation, Log4j is deployed on servers to record users’ activities so they can be analyzed later by security or development teams. Hackers could use the flaw to access sensitive information on a variety of devices, plant ransomware attacks , and take over machines to mine crypto currencies. 
The vulnerability was discovered almost by happenstance, when Microsoft announced it had found suspicious activity in Minecraft: Java Edition, a popular video game it owns. Jen Easterly, director of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, said , “To be clear, this vulnerability poses a severe risk… We urge all organizations to join us in this essential effort and take action.” As with Heartbleed, Log4Shell illustrates how the prevalence of open-source software in enterprises around the world — programs like OpenSSL and Log4j and the multitude of code that depends on them in modern software development — has increasingly made it a favorite attack target. Nearly every organization now uses some amount of open source, thanks to benefits such as lower cost compared with proprietary software and flexibility in a world increasingly dominated by cloud computing. Open source isn’t going away anytime soon — just the opposite — and hackers know this. As for what Log4Shell says about open-source security, I think it raises more questions than it answers. I generally agree that open-source software has security advantages because of the many watchful eyes behind it — all those contributors worldwide who are committed to a program’s quality and security. But a few questions are fair to ask: Who is minding the gates when it comes to securing foundational programs like Log4j? The Apache Foundation says it has more than 8,000 committers collaborating on 350 projects and initiatives, but how many are engaged to keep an eye on an older, perhaps “boring” one such as Log4j? Should large deep-pocketed companies besides Google, which always seems to be heavily involved in such matters, be doing more to support the cause with people and resources? And, finally, why does it always seem to take the disclosure of a vulnerability in an open-source program before the world realizes how critical that program is? Is the industry doing enough to recognize what those software packages are and prioritizing their security? Log4Shell, like Heartbleed before it, demonstrates that, if nothing else, these questions should be asked and answered. Justin Dorfman is open source program manager at cybersecurity company Reblaze. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
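One practical answer to the question of whether organizations even know where these packages live is a basic dependency inventory. The sketch below walks a directory tree looking for bundled log4j-core JAR files and flags versions below a patched threshold; the 2.17.1 cutoff and the scan root are simplifying assumptions for illustration, and the approach will miss Log4j pulled in through fat JARs or other packaging.

```python
# Inventory sketch: find bundled log4j-core JARs and flag old versions.
# The 2.17.1 threshold and the scan root are simplified assumptions.
import re
from pathlib import Path

PATCHED = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")


def scan_for_log4j(root):
    for path in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(path.name)
        if not match:
            continue
        version = tuple(int(part) for part in match.groups())
        status = "OK" if version >= PATCHED else "NEEDS REVIEW"
        yield path, version, status


if __name__ == "__main__":
    for path, version, status in scan_for_log4j("/opt/apps"):
        print(f"{status}: {path} (log4j-core {'.'.join(map(str, version))})")
```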
15,045
2,022
"How to stop the spread of ransomware attacks | VentureBeat"
"https://venturebeat.com/2022/02/06/how-to-stop-the-spread-of-ransomware-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to stop the spread of ransomware attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Harman Singh, director of Cyphere. Ransomware is currently one of the most common types of cyberattacks. It’s essential to be aware of the different variations of ransomware and how they can affect businesses, particularly small and midsized enterprises. As such, let’s outline what ransomware is, why it’s so dangerous for business owners, and identify steps that you can take to protect your company against this threat. What is ransomware? Ransomware is malware that infects devices and locks users out of their data or applications until a ransom is paid. This is costly for businesses because they may have to pay a large sum of money to regain access to their files. It has been revealed that some users have paid enormous fees to obtain the decryption key. The fees can range from a hundred dollars to thousands of dollars, which are typically paid to cybercriminals in bitcoin. Examples of ransomware attacks Some major ransomware attacks include: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! WannaCry A devastating Microsoft exploit was utilized to create a worldwide ransomware virus that infected over 250,000 systems before a kill switch was activated to stop its growth. Proofpoint assisted in locating the sample used to discover the kill switch and in analyzing the ransomware. CryptoLocker CryptoLocker was the first ransomware of this generation to demand Bitcoin for payment and encrypt a user’s hard drive as well as network drives. The CryptoLocker ransomware spread via an email attachment that purported to be FedEx and UPS tracking notifications. In 2014, a decryption tool became available for this malware. NotPetya The NotPetya ransomware attack is one of the most harmful techniques. It’s known for corrupting and encrypting the master boot record of Microsoft Windows-based systems. NotPetya is distributed via the same exploit as WannaCry to quickly spread and demand payment in bitcoin to reverse its modifications. Bad Rabbit Bad Rabbit was visible ransomware that employed similar code and vulnerabilities to NotPetya, spreading across Ukraine, Russia, and other countries. It primarily targeted Ukrainian media organizations, rather than NotPetya. It was spread via a fraudulent Flash player update that might infect users through a drive-by attack. 
History of ransomware The first known ransomware, the AIDS Trojan (also called PC Cyborg), was distributed in 1989 on floppy disks mailed to attendees of a World Health Organization AIDS conference. In 2006, malware called Gpcode.AG began to appear, which installed browser helper objects and ransom notes through rogue Firefox extensions hosted on sites such as Download.com and Brothersoft.com, as well as through emails with malicious attachments. In March 2012, police in Southampton, England, arrested two men on suspicion of creating a ransomware program called Reveton. The program was first identified by the Russian security firm Kaspersky Lab, which named it "Icepol." In May 2012, Symantec reported it had discovered ransomware called "Troj Ransomware," which encrypted data on victims' computers and demanded ransom payments in Bitcoin. In August 2013, a ransomware variant targeting users of Mac OS X was discovered. In December 2013, reports indicated that ransomware had infected more than 16,000 computers in Russia and neighboring countries. In January 2014, security researchers reported that a new ransomware program called CryptoLocker was being distributed through emails on a massive scale. The malware encrypted files on the infected system and then demanded ransom payments in Bitcoin, to be paid within three days before the price doubled. Ransomware became far more widespread during 2016, with several new CryptoLocker variants released and numerous other families appearing throughout the year. In May 2017, the WannaCry ransomware cryptoworm attacked computers running the Microsoft Windows operating system. Types of ransomware There are different types of ransomware, but the most common ones can be broken down into the following categories: File encryption This type of ransomware encrypts files on the victim's computer and then demands ransom payments to decrypt them. Screen lockers This type of ransomware displays a screen that locks victims out of their computers or mobile devices and then demands ransom payments to unlock it. Mobile ransomware This variant encrypts files on the storage of an infected mobile phone or tablet. Once the ransom has been paid, victims can regain access to their devices. DDoS ransom This type of malware does not encrypt files on the victim's computer, but instead uses a botnet to bombard servers with so much traffic that they cannot respond. Ransomware-as-a-Service (RaaS) RaaS is the latest business model for cybercriminals. It allows them to create their own ransomware and then either use it themselves or sell it to other parties who can execute cyberattacks. How do ransomware attacks work? Ransomware can infect a computer in different ways, but the most common is through emails carrying malicious attachments. The ransomware is attached to an email as an executable file (such as .exe or .com), and when the victim opens the attachment, it runs on their computer. Once the virus has infected a computer, it will typically: Encrypt files on the victim's hard drive. Display a ransom note that demands payment to decrypt them (or demands payment in another form).
The ransom note may also provide decryption instructions, telling victims to type "DECRYPT" or "UNLOCK," although some ransomware programs do not provide this information. Disable system functions such as the Windows Task Manager, Registry Editor and Command Prompt. Block access to websites that provide information on how to remove ransomware or decrypt files without paying the ransom. Who is a target for ransomware attacks? Ransomware threats are becoming increasingly common, and ransomware attackers have a variety of options when it comes to selecting the organizations they target. Sometimes it is simply a matter of opportunity: attackers may choose universities because they frequently have smaller security teams and a diverse user base that shares a lot of research data, student information and other Personally Identifiable Information (PII) from staff, students and researchers. Similarly, government agencies and hospitals tend to be frequent targets of ransomware, as they typically need immediate access to their documents. This means they're more likely to pay the ransom. Law firms and other businesses with sensitive data may also be willing to pay quickly to keep news of a breach quiet, which makes them particularly susceptible to leakware attacks. Leakware uses malware designed to extract sensitive information and send it to the attackers. How to prevent ransomware attacks There are different ways to protect a computer from ransomware, and the best way to prevent an attack is to be prepared. Follow the points below to prevent ransomware: Back up your files regularly — this will help ensure that you don't lose your data if it is encrypted by ransomware (a minimal backup-verification sketch follows below). Ensure that your antivirus software is updated frequently. Change the passwords for your important accounts regularly and use a strong, unique password for each of them (or use a recommended password generator). Password managers should be mandatory to generate and store credentials securely. Never share passwords with anyone, or write them down where others could find them. Passwords should be at least 16 characters long, including upper and lowercase letters, numbers, and symbols. Be cautious when opening emails, and never open attachments from unknown senders. If you are uncertain whether an email is legitimate, contact the company directly to verify its authenticity. Disable macros in Microsoft Office programs. Install security software that can help protect your computer from ransomware attacks. A strategic recommendation is to ensure that people, processes and technological controls work together. Principles such as the principle of least privilege (PoLP), defense in depth and a secure multilayered architecture are some of the basics needed to achieve such changes. Regular penetration testing helps an organization see its blind spots and ensure all risks are identified and analyzed before mitigation is exercised. Modern ransomware is sophisticated; it is generally not mathematically feasible to decrypt affected files without the key the attacker holds. Harman Singh is the director of Cyphere.
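To make the backup advice above slightly more concrete, here is a minimal, illustrative sketch (not part of the author's guidance) that records a SHA-256 hash for every file in a backup set and later reports anything missing or altered, which is one simple way to notice that backups themselves have been encrypted or tampered with. The paths are hypothetical, and offline or immutable copies remain essential.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(backup_root: Path, manifest_path: Path) -> None:
    """Record a SHA-256 digest for every file in the backup set."""
    manifest = {}
    for path in sorted(backup_root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(backup_root))] = digest
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(backup_root: Path, manifest_path: Path) -> list[str]:
    """Return a list of files that are missing or whose contents changed."""
    manifest = json.loads(manifest_path.read_text())
    problems = []
    for rel_path, expected in manifest.items():
        path = backup_root / rel_path
        if not path.is_file():
            problems.append(f"missing: {rel_path}")
        elif hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            problems.append(f"changed: {rel_path}")
    return problems

if __name__ == "__main__":
    root = Path("/backups/finance")                  # hypothetical backup location
    manifest = Path("/backups/finance.manifest.json")
    build_manifest(root, manifest)                   # run when the backup is taken
    print(verify_manifest(root, manifest))           # run before you need to restore
```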
"
15,046
2,022
"Hacking groups launching 'cyber proxy war' over Ukraine attacks by Russia | VentureBeat"
"https://venturebeat.com/2022/02/25/hacking-groups-launching-cyber-proxy-war-over-ukraine-attacks-by-russia"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hacking groups launching ‘cyber proxy war’ over Ukraine attacks by Russia Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Russia’s unprovoked invasion of Ukraine is leading hacking groups worldwide to increase their activities — in some cases to support a side, or possibly just to capitalize on the chaos. Since the invasion of Ukraine earlier this week, the Anonymous hacker collective, the Conti ransomware gang and a threat actor in Belarus are among those that appear to have gotten more active — or at least expressed intentions to be. Meanwhile, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued a warning Thursday about a growing threat from an Iranian advanced persistent threat (APT) actor. During the Cold War, “the superpowers fought many small wars by proxy,” said Sam Curry, CSO at Cybereason. “Today, we can expect a cyber proxy war to emerge.” Anonymous Anonymous has declared itself aligned with “Western allies” and said it would only target operations in Russia. The group has posted a number of claims on Twitter. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The Anonymous collective is officially in cyber war against the Russian government,” the group tweeted. On Thursday, Anonymous claimed on Twitter that it brought down numerous websites associated with the Russian government. Those included a state news site, RT News, which reportedly confirmed that it had experienced a distributed denial-of-service (DDoS) attack. Calling the news site “propaganda,” Anonymous said the DDoS attack was carried out “in response to Kremlin’s brutal invasion of #Ukraine.” Then on Friday, Anonymous tweeted that it has “successfully breached and leaked the database of the Russian Ministry of Defence website,” and claimed to have posted “all private data of the Russian MOD.” (The tweet was subsequently taken down because it “violated the Twitter Rules,” the site says.) The group had earlier tweeted a video , featuring its signature Guy Fawkes-masked figure, saying that “if tensions continue to worsen in Ukraine, then we can take hostage industrial control systems.” The involvement of Anonymous is not a surprise, since the group is “well-known for having a principled position on topics and then acting or retaliating via the Internet,” said Casey Ellis, founder and CTO at Bugcrowd. 
A second hacker group that has purportedly disclosed intentions to support Ukraine is Ghost Security, also known as GhostSec, which is believed to be an offshoot of Anonymous. Conti Also unsurprisingly, Conti — believed to be a state-sponsored group operating out of Russia responsible for hundreds of ransomware attacks in recent years — threw its support behind the Russian side. According to reports, Conti posted a message on its site on the dark web, saying that “the Conti Team is officially announcing a full support of Russian government.” “If anybody will decide to organize a cyberattack or any war activities against Russia, we are going to use our all possible resources to strike back at the critical infrastructures of an enemy,” the message said, according to reports. The statement “represents the first major cybercriminal group to publicly back the Russian war effort,” said Chris Morgan, senior cyber threat intelligence analyst at Digital Shadows. It also comes after many warnings from U.S. officials, who’ve emphasized that “the risk from ransomware activity may escalate as sanctions impact Russia,” Morgan said. Digital Shadows has identified Conti as the second most active ransomware group in 2021, by number of victims, and has attributed several attacks against critical national infrastructure to the group — including the crippling ransomware attack against Ireland’s health service in May 2021. Conti’s position statement is “noteworthy in light of Russia’s recent crackdowns on cybercrime and ransomware,” Ellis said. “It signals to me that they are either acting independently as the other groups seem to be, or possibly operating with the Kremlin’s blessing.” Other groups Meanwhile, in Ukraine, the country’s Computer Emergency Response Team (CERT) blamed “UNC1151,” a hacking group whose “members are officers of the Ministry of Defence of the Republic of Belarus,” for a wave of phishing attacks. The attacks targeted Ukrainian military personnel, as well as “related individuals,” CERT said in a Facebook post. At least two other hackers groups have announced that they are supporting Russia: The Red Bandits (a self-described “cyber crime group from Russia,” which has tweeted claims about cyberattacks against Ukraine this week) and CoomingProject (a ransomware group described as “sporadically active”). MuddyWater In the midst of the Russian attacks on Ukraine on Thursday, CISA posted a warning about MuddyWater, a state-sponsored Iranian APT. The group has been observed “conducting cyber espionage and other malicious cyber operations targeting a range of government and private-sector organizations across sectors — including telecommunications, defense, local government, and oil and natural gas — in Asia, Africa, Europe, and North America,” CISA wrote in a post. The timing of the disclosure “is interesting with the Ukraine cyberattacks and conflict playing out in parallel,” said Drew Schmitt, principal threat intelligence analyst for GuidePoint Security. The disclosure suggests the possibility “this could be Iran stepping up operations based on a distracted world view,” though that’s not definitive, Schmitt said. In general, the development shows that as more nations are developing cyber capabilities, more are coming to play, according to John Bambenek, principal threat hunter at Netenrich. And “there is no better training ground for nation-state actors than playing in an active warzone,” Bambenek said. 
Seizing the opportunity Without a doubt, some groups — and nation-state actors — will use the Ukraine invasion as an opportunity to escalate their ongoing cyberattacks “amidst the global chaos,” said Richard Fleeman, vice president for penetration testing ops at Coalfire. Looking ahead, “I believe we will see the continual escalation,” Fleeman said. “These groups thrive on sentiment and will likely continue to build momentum based on their objectives.” Curry agreed, saying that more groups “will pile in, and in the confusion other actors will conduct operations with plausible deniability.” “Let’s all hope that sanity prevails, but let’s prepare our policies and our preparations with the expectation that peace is probably further away than anyone would like,” he said. Ellis said that one concern with the development is the relative difficulty of attribution in cyberattacks — as well as the possibility of incorrect attribution or “even an intentional false flag operation escalating the conflict internationally.” ‘Unprecedented’ situation While all sectors of industry should be aware of the possible repercussions from the increased hacker group activity, certain sectors in the west may be more likely to be targeted, Morgan said. The financial services sector and energy sector would be “at particular risk, should Russia-aligned threat groups target organizations they assess as equivalent to those impacted by western sanctions,” he said. In many ways, this is uncharted territory, given that the world hasn’t had a war like this occur at a time when cyber capabilities were so advanced and widespread. It’s an “unprecedented” situation, said Danny Lopez, CEO of Glasswall — but it’s “not unexpected.” “Cyber has joined land, sea and air to become the fourth conflict theatre,” Lopez said. And whether it’s state-sponsored groups or their proxies, “I think cyber is the new war frontier,” he said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,047
2,020
"Bug bounty platform Bugcrowd raises $30 million | VentureBeat"
"https://venturebeat.com/2020/04/09/bug-bounty-platform-bugcrowd-raises-30-million"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bug bounty platform Bugcrowd raises $30 million Share on Facebook Share on X Share on LinkedIn Bugcrowd reward Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Bug bounty platform Bugcrowd has raised $30 million in a series D round of funding led by Rally Ventures. The announcement comes as the cybersecurity industry struggles with a growing skills gap , compounded by a rising number of cyberattacks that could cost the industry $6 trillion by 2021. This figure may rise even further if the recent shift to remote working becomes a more permanent trend. Cybersecurity officials from the U.S. and U.K. have warned that state-backed hackers and online criminals are taking advantage of the COVID-19 outbreak, using people’s anxiety to lure them into clicking on links and downloading attachments. Founded in 2012, San Francisco-based Bugcrowd is one of a number of crowdsourced bug bounty platforms that connect companies with “white hat hackers” to find and fix vulnerabilities for a fee. Bugcrowd claims a number of high-profile customers, including Twilio, Etsy, Tesla, Cisco, Pinterest, Atlassian, and Sophos. Bugcrowd’s platform offers instant access to additional cybersecurity capacity, and its bug hunters have always worked remotely, so the company is ready for the demands of remote working. “Crowdsourced security platforms are built to simultaneously enable a remote workforce and help organizations maximize their security resources while benefiting from the intelligence and insights of a ‘crowd’ of security researchers,” Bugcrowd CEO Ashish Gupta told VentureBeat. “In the current environment, a lot of companies don’t have the required resources to secure and test remote environments where the majority of business is now taking place.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to Gupta, the rapid shift to remote work has driven increased demand for its platform, including an increase in customers looking for experts to test environments and provide advice on how to better secure data. In March, when many countries went into lockdown, Bugcrowd said it saw a 20% increase in vulnerability submissions compared to its previous record. Above: As vulnerabilities are uncovered by researchers, they are triaged to determine validity and severity. 
Permanent shift As with other companies and industries that have seen a boom in demand from the COVID-19 crisis, it's too soon to say whether things will continue for Bugcrowd once the pandemic passes. However, any demand spike that can be attributed to the recent rise in remote working is actually set against a broader upward trajectory, which the company said led to "record year-over-year growth," including a 100% increase in the North American enterprise market. "For many companies around the world currently, remote work is the new normal," Gupta said. "This means organizations are quickly working to adapt business models and processes to securely enable their workforce. We believe that organizations that resisted remote working arrangements in the past will reconsider their position once the crisis starts to recede, given the cost and productivity benefits." Bugcrowd had previously raised nearly $50 million, including a $26 million round two years ago, and its fresh cash injection will allow it to accelerate the expansion of its crowdsourced security platform. The raise also comes shortly after rival HackerOne secured an extra $36 million in financing and recently revealed that it had paid out around $40 million in bounties in 2019, roughly equal to the total paid out for all previous years combined. In January, Google announced that it had paid security researchers more than $21 million for bugs they found, nearly a third of which came in 2019 alone. The appetite for bug bounty programs is clearly growing, and as cybercriminals adapt their methods to the new environment, companies will have to adapt too. "
15,048
2,021
"CISOs must help their boards manage cyber risk -- here's how | VentureBeat"
"https://venturebeat.com/2021/04/24/cisos-must-help-their-boards-manage-cyber-risk-heres-how"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest CISOs must help their boards manage cyber risk — here’s how Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In one of the more memorable scenes from the film “Jerry Maguire,” Tom Cruise’s character, a football agent, can be seen pleading with his one client, begging him to just “help me, help you.” Maguire kept repeating the line, hoping to break through to the player, trying to convince him to change his attitude in the hopes it would help him land a big contract from his team. This scene came to mind recently when I was thinking about the relationship between CISOs and their boards of directors. Cyber attacks on a corporation can exact a high price — in money, reputation, and lost business. CISOs battle day and night to prevent their company from suffering a crippling cyber attack , yet too often they don’t receive the help or support they need to properly execute their roles. As a result, CISOs often can’t get enough money to hire staff and purchase the systems that can prevent cyberattacks , can’t raise consciousness among executives to pay attention to cybersecurity issues, and can’t persuade boards of directors to focus more of their attention on cybersecurity needs. For CISOs today to be successful, therefore, their responsibilities must not only include building a robust cyber defense strategy on a limited budget but also convincing their corporate boards of directors — the group eventually responsible for their budget — that cybersecurity needs to be a budgeting priority. Yet, according to a report issued by consulting firm EY , the board is not engaged in the cybersecurity debate. In the report, nearly half of CISOs said their board “does not yet have a full understanding of cybersecurity risk,” and that just 54% of organizations regularly schedule cybersecurity as a board agenda item. Getting the board onboard How then, can CISOs convince their boards that cybersecurity spending needs to be a priority, and how should they express that need in a way boards can relate to? VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The first priority for CISOs to advance their objectives is to ensure that board members understand the business issues — and not just the IT issues — involved in cybersecurity, stressing the damage that a cyber attack can have on an organization. 
Using real-life case studies at quarterly board meetings will help drive the point home — such as the object lesson furnished by Yahoo’s 2013 data breach, perhaps the most expensive in history. That breach cost Yahoo $50 million in damages , paid to customers whose details were revealed; millions of dollars more in fees for free credit monitoring it agreed to supply victims as part of its settlement; and a $350 million discount in its sale price to Verizon. However, it is not enough for CISOs to highlight the potential damage a cyber attack can cause. Working with colleagues from across the company, they must also convincingly demonstrate the benefits that a robust cyber program can have for a business, stressing the opportunity to pursue additional revenue streams, target new customers, and upsell to existing clients. Along with the business aspects of cybersecurity, board members need to both better understand the threats and come to appreciate the steps required to mitigate those threats so they can make informed, strategic decisions for the business. CISO presentations to the board need to include a discussion of the constantly evolving threat landscape, with discussions focused on how hackers choose their victims, how they penetrate networks, which security systems are likely to prevent attacks, and how effective they are. What the board needs to see Just as the CEO presents budget and corporate strategy reports to directors, CISOs should present security plans, with details on how security teams plan to defend the company and what they can do to minimize damage if an attack does take place. Once boards understand the technical issues, they will be able to understand the strategies presented to them — and weigh in on whether even more needs to be done. To further make their case to board members, CISOs should propose a formal governance structure — similar to what the board would use for other business objectives — that will allow for effective reporting and analysis of data. That structure should include periodic audits and reviews, assigning ownership, ensuring that funding is adequate to meet challenges and needs, and developing monitoring mechanisms and accountability systems with measurable KPIs. Members of a board of directors usually get to that position because of their business acumen. But in today’s cyber-environment, that business experience must be filtered through the lens of the potential impact a cyber event can have on a company. By helping their board of directors have a “cyber-first” mentality, CISOs will help themselves, allowing their company to develop a healthier and more robust cyber posture. Ronen Lago is CTO at CYE. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,049
2,021
"We underestimated IoT security. Let’s not make that mistake with robotics. | VentureBeat"
"https://venturebeat.com/2021/08/28/we-underestimated-iot-security-lets-not-make-that-mistake-with-robotics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest We underestimated IoT security. Let’s not make that mistake with robotics. Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New commercial robots are changing what’s possible in the physical world. They are tackling increasingly complex tasks beyond early uses such as manufacturing assembly lines and material handling in warehouses. For example, ABB’s PixelPaint uses a pair of high-precision robotic arms to make car painting faster and more customizable. Adidas’s STRUNG is a textile-industry-first robot that uses athlete data to make perfectly fitting shoes. And this year, the world has watched in fascination as NASA’s one-ton rover Perseverance and tiny helicopter Ingenuity explored Mars. Artificial intelligence and other advances will accelerate robots’ ability to sense and adapt to their environment. Robotics companies are eagerly developing new machines with ever more impressive functionality. Many are built on the Robot Operating System (ROS), which is the de-facto open-source framework for robot application development. (My company chairs the initiative’s security group.) So the robot future is coming. But robot developers must not lose sight of a critical priority: security. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Robots are far more capable if connected to the internet. That allows them to work with other robots and access enterprise IT systems and the cloud so they can process and learn from huge amounts of remotely stored data. Connectivity also provides agility for quick bug patching or system reconfiguration. But even if placed behind a firewall, inadequately secured robots may not be safe. We’ve already seen malware that breaches isolated networks — for instance, the Stuxnet malware attack. But that occurred more than 10 years ago. Today’s malware is far more effective. If the malware has a hold on a network and a robot is the unpatched, unsecured link in the chain, the robot will open the door to attackers. The bottom line: We need to acknowledge that robots are vulnerable to cyberattacks. Imagine the damage that could be done if a hacker was able to maliciously hijack and control robots being used in, say, a healthcare setting. I worry that many companies, in their focus on development, are paying too little attention to crucial security questions as they approach production. 
With competition in the market heating up — worldwide spending on robotics is forecast to reach $210 billion by 2025, more than double the 2020 total — companies will be increasingly tempted to ship quickly without rigorously hardening the machines against attack. That could expose them to vulnerabilities such as hard-coded credentials, unencrypted development keys, a missing update path, and other security weaknesses. Another issue is complexity. Enabling security techniques such as full disk encryption, cgroups, AppArmor, and seccomp is challenging. Someone has to configure those and set up the security policy. Robots are already complex enough. They're built by mechanical and electronics engineers, and these arcane security technologies aren't in their wheelhouse. The tech industry was also late in focusing on Internet of Things (IoT) security. Too many devices were shipped with weak password protection, an ineffective patch and update system, and other flaws. Intrusions into smart devices and networks still continue. The security fates of IoT and robotics are intertwined as the Internet of Robotic Things (IoRT) emerges as a paradigm for combining intelligent sensors that monitor surrounding events with robots, so the robots receive more information to do their work. We can't allow history to repeat itself. Just as the industry has come to realize that the IoT is an attack surface that must be safeguarded as carefully as any other enterprise system, we must ensure security is a high priority in robotics deployment. But how exactly? A big step involves the Robot Operating System. ROS to this point hasn't been built with security in mind, but there's a big opportunity to change that. Because ROS isn't merely software but an international community of engineers, developers, and academics dedicated to making robots better, the robotics field can tap into an enormous pool of talent to optimize security. The community can identify vulnerabilities and report them, contribute hardening measures, follow and propose secure design principles, and apply recommendations from cybersecurity frameworks. Open source robotics will become as secure as the community wants it to be. Regulations could be helpful too. Innovation-driven regulation, based on the collective views and needs of developers and users, could help accelerate the development of open source robotics security. For example, a law on the books in the U.S., the IoT Cybersecurity Improvement Act, and a similar initiative in the U.K. should be expanded to address robotics security. The use of robots in many industries will continue to grow in the coming years. It's unacceptable not to make security a top priority. Let's learn from the mistakes of IoT and get it done. Gabriel Aguiar Noury is robotics and smart displays product manager at Canonical, the publisher of Ubuntu.
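As a small illustration of the hard-coded credential and unencrypted key problem mentioned above, the sketch below greps a source or configuration tree for a few obvious secret patterns before an image ships. It is an assumption-laden toy, not a Canonical or ROS tool: the patterns, skipped file types and paths are invented, and real projects would run a dedicated secret scanner in CI.

```python
import re
import sys
from pathlib import Path

# Naive, illustrative patterns for secrets that should never ship on a robot.
PATTERNS = {
    "hard-coded password": re.compile(r"(password|passwd|pwd)\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
    "private key block": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "API token": re.compile(r"(api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I),
}

SKIP_SUFFIXES = {".png", ".jpg", ".stl", ".bag"}  # binary assets, skipped in this sketch

def scan_tree(root: Path) -> list[tuple[Path, str, str]]:
    """Return (file, finding type, matched snippet) for every suspicious hit."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix.lower() in SKIP_SUFFIXES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((path, label, match.group(0)[:60]))
    return findings

if __name__ == "__main__":
    for path, label, snippet in scan_tree(Path(sys.argv[1])):
        print(f"{path}: {label}: {snippet}")
```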
"
15,050
2,021
"Zero trust: The trusted model for secure data-driven business | VentureBeat"
"https://venturebeat.com/2021/09/15/zero-trust-the-trusted-model-for-secure-data-driven-business"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zero trust: The trusted model for secure data-driven business Share on Facebook Share on X Share on LinkedIn Data security Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was written by Arvind Raman, CISO of Mitel. The pandemic has accelerated the evolution of Chief Information Security Officers (CISO) from traditional gatekeepers to business enablers and strategic counselors in our new, increasingly cloud-centric hybrid work environment, but this doesn’t mean we make security secondary. To the contrary, it’s heightened the need for a CISOs expertise. The massive shift to cloud adoption is leaving legacy organizations vulnerable to potential breaches, and security chiefs must find solutions that both protect and provide access to the important information that drives critical business decisions. Many are turning to a “ zero trust ” model to protect this critical data on which the business runs — in fact, 82% of senior business leaders are in the process of implementing this model, and 71% plan to expand it over the next year. Why? The name says it all. Zero trust doesn’t count anyone out as a threat. It’s about verifying and mitigating threats across hybrid clouds and edge devices both internally and externally. From traditional IT security to zero trust With a new business paradigm, CISOs are moving away from a traditional, react and respond IT security strategy to one that’s more proactive and supports long-term business goals. Traditional IT security models trust users who are inside organizations’ networks. Zero trust verifies users at multiple checkpoints to ensure the right person is receiving the right access. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In traditional IT environments, hackers can easily break through firewalls with stolen/compromised usernames and passwords causing data theft and reputation damage. When implemented effectively, zero trust allows authorized users to seamlessly and safely access company information from any device anywhere in the world. Think about zero trust like airport security checks, especially for international travel. To lessen threats and limit potential risks, we go through multiple security checkpoints prior to boarding. Once authorized, a zero trust model gives users access to only the data they need to do their jobs. 
Granting users only the data they need limits sprawling data surfaces and reduces areas of attack, which is important when weighing the growth of data against the challenge of understanding where data lives. The pandemic further accelerated the rate of data creation, yet, according to IDC, just 2% of that data was saved and retained in 2021. One of the biggest hurdles organizations face when implementing zero trust is a lack of full visibility into the organization's data and assets to begin with. Organizations with legacy infrastructure may have a tougher road in implementing zero trust, but it is definitely doable. The Biden administration's recent executive order, which positions the zero trust model as the answer to the post-pandemic security landscape, has made doing so a business imperative. CISOs must establish maximum visibility into their organizational assets and work with internal teams to implement the principles of zero trust. What is most important to the organization for security? Balancing business needs and user experience is the key to customizing zero trust. To effectively meet both needs, CISOs can ask the following questions: What are the business objectives? What are the top security risks impacting the business objectives and how can they be managed? What are the most important data assets in our organization? Where is the information stored and is it vulnerable? What's our current access management process? What's our device access management policy? What should it be? What security gaps do we need to fill, and in what order? With these answers, CISOs can begin to create an effective risk management framework using zero trust across applications, networks, and endpoints. A well-thought-out zero trust plan allows security chiefs to analyze and provide critical data and to advise senior business leaders on strategic decisions that affect organizational goals. While IT professionals and CISOs cannot control the physical environment, we can control the digital environment and be an enabler of secure business, rather than being viewed as a blocker. Zero trust is the right way forward. Arvind Raman is CISO at Mitel. "
15,051
2,022
"10 things CISOs need to know about zero trust | VentureBeat"
"https://venturebeat.com/2022/04/15/10-things-cisos-need-to-know-about-zero-trust"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 things CISOs need to know about zero trust Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tech stacks that rely on trust make it easy for cyberattackers to breach enterprise networks. Perimeter-based approaches from the past that rely on trust first are proving to be an expensive enterprise liability. Basing networks on trust alone creates too many exploitable gaps by cyberattackers who are more adept at exploiting them. Worst of all, perimeter networks by design rely on interdomain trust relationships, exposing entire networks at once. What worked in the past for connecting employees and enabling collaboration outside the walls of any business isn’t secure enough to stand up to the more orchestrated, intricate attack strategies happening today. Eliminating trust from tech stacks needs to be a high priority Zero Trust Network Access (ZTNA) is designed to remove trust from tech stacks and alleviate the liabilities that can bring down enterprise networks. Over the last eighteen months, the exponential rise in cyberattacks shows that patching perimeter-based network security isn’t working. Cyberattackers can still access networks by exploiting unsecured endpoints, capturing and abusing privileged access credentials and capitalizing on systems that are months behind on security patches. In the first quarter of 2022 alone, there has been a 14% increase in breaches compared to Q1 2021. Cyberattacks compromised 92% of all data breaches in the first three months of 2022 , with phishing and ransomware remaining the top two root causes of data compromises. Reducing the risks of supporting fast-growing hybrid workforces globally while upgrading tech stacks to make them more resilient to attack and less dependent on trust are motivating CISOs to adopt ZTNA. In addition, securing remote, hybrid workforces, launching new digital-first business growth initiatives and enabling virtual partners & suppliers all drive ZTNA demand. As a result, Gartner is seeing a 60% year-over-year growth rate in ZTNA adoption. Their 2022 Market Guide for Zero Trust Network Access is noteworthy in providing insights into all CISOs need to know about zero trust security. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What CISOs need to know about zero trust Targeting the trust gaps in tech stacks with ZTNA is delivering results. 
There are ten areas that CISOs can focus on to make progress and start closing more gaps now, based on the insights gained from the Gartner market guide and research completed by VentureBeat: Clean up access privileges before starting IAM or PAM. Closing the trust gaps that jeopardize identities and privileged access credentials is often the priority organizations concentrate on first. It is common to find contractors, sales, service and support partners from years ago still having access to portals, internal sites and applications. Purging access privileges for expired accounts and partners is a must-do; it is the essence of closing trust gaps. Getting this done first ensures only the contractors, sales, service and support partners who need access to internal systems can get them. Today, locking down valid accounts with Multi-Factor Authentication (MFA) is table stakes. MFA needs to be active on all valid accounts from the first day. Zero trust needs to be at the core of System Development Lifecycles (SDLC) and APIs. Perimeter-based security dominates devops environments, leaving gaps cyberattackers continually attempt to exploit. API breaches, including those at Capital One , JustDial, T-Mobile and elsewhere continue to underscore how perimeter-based approaches to securing web applications aren’t working. When APIs and the SDLCs they support to rely on perimeter-based security, they often fail to stop attacks. APIs are becoming one of the fastest-growing threat vectors, given how quickly devops teams create them to support new digital growth initiatives. CIOs and CISOs need to have a plan to protect them using zero trust. A good place to start is to define API management and web application firewalls that secure APIs while protecting privileged access credentials and identity infrastructure data. CISOs also need to consider how their teams can identify the threats in hidden APIs and document API use levels and trends. Finally, there needs to be a strong focus on API security testing and a distributed enforcement model to protect APIs across the entire infrastructure. The business benefits of APIs are real, as programmers employ them for speedy development and integration. However, unsecured APIs present a keen application security challenge that cannot be ignored. Build a strong business case for ZTNA-based endpoint security. CISOs and their teams continue to be stretched too thin, supporting virtual workforces, transitioning workloads to the cloud and developing new applications. Adopting a ZTNA-based approach to endpoint security is helping to save the IT and security team’s time by securing IT infrastructure and operations-based systems and protecting customer and channel identities and data. CISOs who create a business case for adopting a ZTNA-based approach to endpoint security have the greatest chance of getting new funding. Ericom’s Zero Trust Market Dynamics Survey found that 80% of organizations plan to implement zero-trust security in less than 12 months, and 83% agree that zero trust is strategically necessary for their ongoing business. Cloud-based Endpoint Protection Platforms (EPP) provide a faster onramp for enterprises looking for endpoint data. Combining anonymized data from their customer base and using Tableau to create a cloud-based real-time dashboard, Absolute’s Remote Work and Distance Learning Center provides a broad benchmark of endpoint security health. 
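As an aside on the endpoint-health benchmarking mentioned just above, a fleet score can be as simple as the share of devices passing a handful of hygiene checks. The sketch below is purely illustrative and is not Absolute's methodology; the device attributes and pass criteria are invented.

```python
from dataclasses import dataclass

@dataclass
class Device:
    hostname: str
    disk_encrypted: bool
    patched: bool            # OS and critical apps within the patch SLA
    edr_running: bool        # endpoint protection agent present and healthy

def device_healthy(d: Device) -> bool:
    """A device counts as healthy only if it passes every hygiene check."""
    return d.disk_encrypted and d.patched and d.edr_running

def fleet_health(devices: list[Device]) -> float:
    """Percentage of the fleet passing all checks."""
    if not devices:
        return 0.0
    return 100.0 * sum(device_healthy(d) for d in devices) / len(devices)

fleet = [
    Device("sales-lt-014", True, True, True),
    Device("eng-lt-203", True, False, True),
    Device("hr-lt-007", False, True, True),
]
print(f"Fleet health: {fleet_health(fleet):.0f}% of devices pass all checks")
```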
The dashboard provides insights into device and data security, device health, device type and device usage and collaboration. Absolute is also the first to create a self-healing ZTNA client for Windows capable of automatically repairing or reinstalling itself if tampered with, accidentally removed or otherwise stopped working – ensuring it remains healthy and delivers full intended value. Cloud-based EPP and self-healing endpoint adoption continue growing. Self-healing endpoints deliver greater scale, security and speed to endpoint management – helping to offload overworked IT teams. A self-healing endpoint has self-diagnostics designed that can identify breach attempts and take immediate action to thwart them when combined with adaptive intelligence. Self-healing endpoints then shut themselves off, re-check all OS and application versioning, including patch updates, and reset themselves to an optimized, secure configuration. All these activities happen without human intervention. Absolute Software , Akamai , Blackberry, Cisco’s self-healing networks, Ivanti , Malwarebytes , McAfee, Microsoft 365 , Qualys , SentinelOne , Tanium , Trend Micro , Webroot and many others all claim their endpoints can autonomously self-heal themselves. Just one unprotected machine identity will compromise a network. Machine identities, including bots, IoT devices and robots, are the fastest proliferating threat surface in enterprises today, growing at twice the rate of human identities. It’s common for an organization not to have a handle on just how many machine identities exist across their networks as a result. It’s not surprising that 25% of security leaders say the number of identities they’re managing has increased by ten or more in the last year. Overloaded IT teams are still using spreadsheets to track digital certificates, and the majority don’t have an accurate inventory of their SSH keys. No single pane of glass can track machine identities, governance, user policies and endpoint health. Machine identities’ rapid growth is attracting R&D investment, however. Leaders who combine machine identities and governance include Delinea , Microsoft Security , Ivanti , SailPoint , Venafi , ZScaler and others. Ericom’s ZTEdge SASE Platform and their machine learning-based Automatic Policy Builder create and maintain user and machine-level policies today. Customer case studies on the Ericom site provide examples of how Policy Builder effectively automates repetitive tasks and delivers higher accuracy in policies. Getting governance right on machine identities as they are created can stop a potential breach from happening. Consider strengthening AWS’ IAM Module in multicloud environments. AWS’ IAM module centralizes identity roles, policies and Config Rules yet still doesn’t go far enough to protect more complex multicloud configurations. AWS provides excellent baseline support for Identity and Access Management at no charge as part of their AWS instances. CISOs and the enterprises they serve need to evaluate how the AWS IAM configurations enable zero trust security across all cloud instances. By taking a “never trust, always verify, enforce least privilege” strategy when it comes to their hybrid and multicloud strategies, organizations can alleviate costly breaches that harm the long-term operations of any business. Remote Browser Isolation (RBI) is table stakes for securing Internet access. One of the greatest advantages of RBI is that it does not disrupt an existing tech stack; it protects it. 
Therefore, CISOs that need to reduce the complexity and size of their web-facing attack surfaces can use RBI, as it was purpose-built for this task. It is designed to isolate every user's internet activity from enterprise networks and systems. Trusted relationships across an enterprise's tech stack are a liability; RBI takes a zero-trust approach to browsing by assuming no web content is safe. The bottom line is that RBI is core to zero-trust security. The value RBI delivers to enterprises continues to attract mergers, acquisitions, and private equity investment. Examples include McAfee acquiring Light Point Security, Cloudflare acquiring S23 Systems and Forcepoint acquiring Cyberinc, with others in the planning stages this year. Leaders in RBI include Broadcom, Forcepoint, Ericom, Iboss, Lookout, NetSkope, Palo Alto Networks, Zscaler, and others. Ericom is noteworthy for its approach to zero-trust RBI by preserving the native browser's performance and user experience while hardening security and extending web and cloud application support. Have a ZTNA-based strategy to authenticate users on all mobile devices. Every business relies on its employees to get work done and drive revenue using the most pervasive yet porous device. Unfortunately, mobile devices are among the fastest-growing threat surfaces because cyberattackers keep learning new ways to capture privileged access credentials. Attaining a ZTNA strategy on mobile devices starts with visibility across all endpoint devices. Next, what's needed is a Unified Endpoint Management (UEM) platform capable of delivering device management capabilities that can support location-agnostic requirements, including cloud-first OS delivery, peer-to-peer patch management and remote support. CISOs need to consider how a UEM platform can also improve the users' experience while also factoring in how endpoint detection and response (EDR) fits into replacing VPNs. The Forrester Wave™: Unified Endpoint Management, Q4 2021 Report names Ivanti, Microsoft, and VMware as market leaders, with Ivanti having the most fully integrated UEM, enterprise service management (ESM), and end-user experience management (EUEM) capability. Infrastructure monitoring is essential for building a zero-trust knowledge base. Real-time monitoring can provide insights into network anomalies and attempted breaches over time. It's also invaluable for creating a knowledge base of how zero trust or ZTNA investments and initiatives deliver value. Log monitoring systems prove invaluable in identifying machine endpoint configuration and performance anomalies in real time. AIOps effectively identifies anomalies and performance event correlations on the fly, contributing to greater business continuity. Leaders in this area include Absolute, Datadog, Redscan, LogicMonitor and others. Absolute's recently introduced Absolute Insights for Network (formerly NetMotion Mobile IQ) represents what's available in the current generation of monitoring platforms. It's designed to monitor, investigate and remediate end-user performance issues quickly and at scale, even on networks that are not company-owned or managed. Additionally, CISOs can gain increased visibility into the effectiveness of Zero Trust Network Access (ZTNA) policy enforcement (e.g., policy-blocked hosts/websites, addresses/ports, and web reputation), allowing for immediate impact analysis and further fine-tuning of ZTNA policies to minimize phishing, smishing and malicious web destinations.
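As a toy illustration of the baseline-and-deviation logic such monitoring and AIOps platforms apply at much larger scale, the sketch below flags metric samples that drift several standard deviations away from a rolling baseline. It is a statistics exercise, not a substitute for a monitoring product, and the window size, threshold and example metric are arbitrary assumptions.

    from collections import deque
    from statistics import mean, pstdev

    WINDOW = 60        # samples in the rolling baseline (assumed)
    THRESHOLD = 3.0    # flag anything more than 3 standard deviations out (assumed)

    def detect_anomalies(samples):
        """Yield (index, value) pairs that deviate sharply from the rolling baseline."""
        history = deque(maxlen=WINDOW)
        for i, value in enumerate(samples):
            if len(history) == WINDOW:
                mu, sigma = mean(history), pstdev(history)
                if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
                    yield i, value
            history.append(value)

    # Example: per-minute failed-authentication counts reported by an endpoint agent.
    failed_logins = [2, 3, 1, 2, 4, 2, 3, 2, 1, 2] * 6 + [48]
    for idx, val in detect_anomalies(failed_logins):
        print(f"sample {idx}: value {val} is far outside the rolling baseline")

Feeding flagged samples into the same knowledge base used to track ZTNA policy enforcement is what turns raw telemetry into evidence that the program is working.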
Take the risk out of zero-trust secured multicloud configurations with better training. Gartner predicts this year that 50% of enterprises will unknowingly and mistakenly expose some applications, network segments, storage, and APIs directly to the public, up from 25% in 2018. By 2023, nearly all (99%) of cloud security failures will be traced back to manual controls not being set correctly. Because misconfiguration is the leading cause of hybrid cloud breaches today, CIOs and CISOs need to pay to certify every member of their team who works on these configurations. Automating configuration checking is a start, but CIOs and CISOs need to keep scanning and audit tools current while overseeing them for accuracy. Automated checkers aren't strong at validating unprotected endpoints, for example, which makes continued learning, certification and training essential. Identity and access management (IAM) needs to scale across supply chains and service networks. The cornerstone of a successful ZTNA strategy is getting IAM right. For a ZTNA strategy to succeed, it needs to be based on an approach to IAM that can quickly accommodate new human and machine identities being added across supplier and in-house networks. Standalone IAM solutions tend to be expensive, however. For CISOs just starting on zero trust, it's a good idea to find a solution that has IAM integrated as a core part of its platform. Leading cybersecurity providers include Akamai, Fortinet, Ericom, Ivanti, and Palo Alto Networks. Ericom's ZTEdge platform is noteworthy for combining ML-enabled identity and access management, ZTNA, micro-segmentation and secure web gateway (SWG) with remote browser isolation (RBI). The future success of ZTNA. Pursuing a zero trust or ZTNA strategy is as much a business decision as a technology one. But, as Gartner's 2022 Market Guide for Zero Trust Network Access illustrates, the most successful implementations begin with a strategy supported by a roadmap. Removing any implicit trust from the tech stack, a core concept of zero trust, is foundational to any successful ZTNA strategy. The guide is noteworthy in its insights into the areas CISOs need to concentrate on to excel with their ZTNA strategies. Identities are the new security perimeter, and the Gartner guide provides prescriptive guidance on how to take that challenge on. "
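Automated configuration checking, which the training recommendation above treats as only a starting point, can begin with narrow, high-value checks. Below is a minimal, illustrative sketch that flags S3 buckets missing a complete bucket-level public access block; it assumes boto3 credentials are configured, covers just one of many misconfiguration classes, and is no substitute for the certified, trained reviewers the Gartner figures argue for.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(config.values())
        except ClientError as exc:
            # Having no public access block configured at all is itself a finding.
            if exc.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                fully_blocked = False
            else:
                raise
        if not fully_blocked:
            print(f"{name}: public access block missing or incomplete; review bucket policy and ACLs")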
15,052
2,021
"Report: 60% of orgs hit by ransomware-as-a-service attacks in the past 18 months | VentureBeat"
"https://venturebeat.com/2021/11/15/report-60-of-orgs-hit-by-ransomware-as-a-service-attacks-in-the-past-18-months"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 60% of orgs hit by ransomware-as-a-service attacks in the past 18 months Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a new report from U.K.-based cybersecurity company Sophos , ransomware-as-a-service attacks became more popular in the past 18 months. Of the hundreds of ransomware attacks Sophos investigated during that time, nearly 60% were perpetrated by ransomware-as-a-service groups. Such attacks, where one group builds the malicious code and sells it to another group to use in the virtual breaking-and-entering of a vulnerable enterprise or organization, are growing increasingly sophisticated. Over the last two years, Sophos has observed a growing trend where malware developers lease their code to attackers to do the dirty work of breaking into an enterprise company’s network and holding its systems or data hostage until a ransom is paid. The Conti brand of ransomware-as-a-service, which the FBI said in May had attacked 16 medical and first responder networks , was the most popular type of ransomware deployed during that time. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The report notes that some malware developers even create their own attack playbooks and make them available to their affiliates. As a result, different attack groups end up implementing very similar attacks. The more that specialist ransomware programmers outsource their malicious code and infrastructure to third-party affiliates, the more the size and scope of ransomware delivery methods will grow. It is no longer enough for organizations to assume they’re safe by monitoring security tools and ensuring they’re detecting malicious code. IT teams need to understand the evolution of ransomware, and specifically the growing ransomware-as-a-service trend, in order to develop effective cybersecurity strategies for protecting their organizations in 2022 and beyond. Sophos compiled the data in the report from a statistical analysis of the hundreds of ransomware attacks and hundreds of thousands of malware samples its threat researchers and response teams investigated in the past 18 months. Read the full report by Sophos. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
15,053
2,022
"CrowdStrike unveils tools to fight threats to cloud-native applications | VentureBeat"
"https://venturebeat.com/2022/04/27/crowdstrike-unveils-adversary-focused-cnapp-capabilities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CrowdStrike unveils tools to fight threats to cloud-native applications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. CrowdStrike has unveiled new capabilities for its adversary-focused cloud-native application protection platform (CNAPP). These new capabilities shorten the time it takes to respond to threats in cloud environments and workloads by accelerating threat hunting. CrowdStrike specializes in cloud-delivered endpoint protection, cloud workloads identity and data. CrowdStrike Security Cloud and world-class AI operate on the CrowdStrike Falcon platform. This platform employs real-time attack indicators, threat intelligence, developing adversary trade craft and enriched telemetry from across the enterprise, to enable hyper-accurate detections, automated protection and remediation, elite threat hunting and prioritized visibility of vulnerabilities. The Falcon platform, which is purpose-built in the cloud with a single lightweight-agent architecture, is designed to facilitate fast and flexible setup, enhanced security and efficiency, easy implementation and quicker time-to-value. Unveiled on the Falcon platform, the new adversary-focused CNAPP capabilities bring together two of CrowdStrike’s cloud solutions via a shared cloud activity dashboard. The popular agentless Falcon Horizon called Cloud Security Posture Management (CSPM) and the agent-based Falcon Cloud Workload Protection (CWP) modules. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Created to assist security and devops teams in prioritizing the most critical cloud security issues, addressing runtime threats and enabling cloud threat hunting, the updates also include new methods of leveraging Falcon Fusion (CrowdStrike’s SOAR framework) to automate remediation for Amazon Web Services (AWS); new custom Indicators of Misconfigurations (IOMs) for AWS, Google Cloud Platform (GCP) and Microsoft Azure; new methods to prevent identity-based threats; and more. Organizations that use multicloud environments and hybrid work models have disintegrated traditional work boundaries. Developers spin clouds up and down in minutes without noticing any potential misalignment. Similarly, public cloud instances are made available for immediate use without the use of MFA (multifactor authentication) or other security procedures. An attacker can exploit a security flaw in less than a second and launch a fast-moving lateral breach. 
To secure their cloud infrastructures before a threat actor finds a way in, companies must think like attackers. Recently named a Strong Performer in The Forrester Wave , CrowdStrike is addressing this need with the adversary-focused approach to CNAPP, which is powered by industry-leading threat intelligence. “CrowdStrike is distinct from other vendors in the market because we offer both agent-based and agentless solutions, giving organizations complete visibility, detection and remediation capabilities to safeguard their cloud infrastructure,” said Amol Kulkarni, the chief product and engineering officer at CrowdStrike. According to Kulkarni, CrowdStrike also offers breach protection for cloud workloads, containers and Kubernetes. The company does this for enterprises with multicloud and hybrid cloud infrastructures, giving them real-time alerting and reporting on over 150 cloud threats. CrowdStrike’s adversary-focused approach to CNAPP, which is backed by industry-leading threat intelligence, guarantees that enterprises are well-prepared to defend against cloud breaches. Dave Worthington, general manager of digital security and risk at Jemena, affirmed that CrowdStrike’s CNAPP provides a deep and accurate view of the cloud threat landscape. This, he said, has set CrowdStrike apart from the competition. “CrowdStrike’s cloud security services, such as Falcon Horizon, which we use to monitor our cloud environment and detect misconfigurations, vulnerabilities and security threats, are continually evolving and improving, which is one of the biggest benefits I’ve seen,” Worthington said. Jason Waits, director of cybersecurity at Inductive Automation, similarly believes that the Falcon platform’s expansion to enable CNAPP can deliver full cloud security with threat hunting capabilities that no other vendor can replicate. “CrowdStrike’s performance amazes us due to its minimal CPU usage and relatively low impact on overall system performance. We’re able to reduce security blindspots with Falcon Horizon by continuously monitoring our cloud environment for misconfigurations,” Waits said. CrowdStrike’s adversary-focused CNAPP capabilities Cloud activity dashboard: Combines Falcon Horizon’s CSPM insights with Falcon CWP’s workload protection into a single user interface. This allows for speedier assessment and intervention by prioritizing critical issues, addressing runtime risks and enabling cloud threat hunting. Custom indicators of misconfigurations (IOMs) for AWS, Azure and GCP: Guarantees that security is a component of every cloud deployment, with tailored policies that correspond with organizational goals. Identity access analyzer for Azure: Defends against identity-based threats. It also guarantees that permissions are enforced based on the least privilege for Azure Active Directory (AD) groups’ users and apps. Falcon Horizon’s existing Identity Access Analyzer for AWS functionality has been extended with this capability. Automated remediation workflow for AWS: Responds to threats with guided and automated remediation powered by Falcon Fusion. Workflows provide context and prescriptive direction for resolving issues and reducing incident resolution time. Falcon container detection: Defends against malware and sophisticated threats targeting containers automatically with machine learning (ML), artificial intelligence (AI), indicators of attack (IoAs), deep kernel visibility and custom indicators of compromise(IoCs) as well as behavioral blocking. 
Rogue container detection: Keeps track of container deployments and decommissions. It detects and scans malicious images and also identifies and prevents privileged or writable containers from being created – which can be used as entry points for attacks. "
15,054
2,021
"No-code development platform Webflow raises $140 million at a $2.1 billion valuation | VentureBeat"
"https://venturebeat.com/2021/01/13/no-code-development-platform-webflow-raises-140-million-at-2-1-billion-valuation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages No-code development platform Webflow raises $140 million at a $2.1 billion valuation Share on Facebook Share on X Share on LinkedIn Webflow on laptop Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Webflow , a cloud-based no-code website development and hosting platform used by major enterprises such as Allianz, Rakuten, Zendesk, and Dell, has raised $140 million in a series B round of funding co-led by Accel and Silversmith. Alphabet’s CapitalG growth fund also participated in the round as a new investor, with Webflow now valued at $2.1 billion. For context, Webflow was reportedly valued at no more than $400 million at its $72 million series A round 17 months ago. This highlights a rapid year of growth for the San Francisco-based startup as it doubled its paying customers to 100,000 and launched a new enterprise product replete with single sign-on support, security audit compliance, AWS-powered advanced DDoS protection, uptime monitoring, and more. Going low-code The web has been awash in website-building tools designed for non-coders since its earliest days, and the tools have become increasingly sophisticated, creating a reported $6.5 billion market in the process. Nasdaq-listed website builder Wix is currently flying high, with its shares doubling over the past year to make it a $15 billion company, while well-funded Squarespace is also currently gearing up to go public. But Webflow has set out to separate itself from the pack with a focus on professional websites that can be completely built from a visual canvas. “We’re in an entirely different professional league,” Webflow cofounder and CEO Vlad Magdalin told VentureBeat. “Webflow doesn’t rely on templates, but rather allows any kind of professional website to be designed from the ground up.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Weblow customization Of course, there’s also the omnipresent WordPress , and it’s easy enough to carve out a half-respectable website using its array of themes and templates. But any customization usually requires some form of coding, be that in HTML, PHP, CSS, or JavaScript. “Our main unique selling point is the visual development environment to make down-to-the-pixel customizations, without code,” Magdalin added. He argues that the company’s closest competitor is actually hand-coding. 
“Usually, when companies or freelance service professionals switch to Webflow, they are not switching from a product, but rather from having to rely on a developer to manually translate design files to code,” Magdalin said. Founded in 2012, Webflow was originally focused more on prototyping or simpler low-functionality websites, but today it’s used by businesses of all sizes across the spectrum. Dell, for example, uses Webflow for internal websites that host content such as style guides, while Dropbox’s e-signature division HelloSign uses Webflow to design, develop, and host all of its marketing pages. Many others continue to use Webflow for rapid prototyping or to launch smaller microsites or landing pages. Above: Dropbox-owned HelloSign uses Webflow for its marketing pages At its core, Webflow helps circumvent the time-consuming steps typically involved in getting a website from the design stage to the web. Development teams can also use Webflow’s APIs to hook the site up to various external tools that are integral to modern websites, such as Stripe, Airtable, HubSpot, Mailchimp, Zapier, and Segment. But why would a deep-pocketed enterprise use Webflow instead of investing in its own development team? According to Magdalin, the shift mirrors changes across the tech landscape over the past decades and comes down to efficiency. “Enterprises used to build datacenters for storing data, but you now see most forward-thinking technical teams using cloud infrastructure — not because they can’t build datacenters, but because it’s far more efficient to rely on cloud providers,” he said. “We believe a similar path will happen for manual coding in areas where visual tools can do the job just as well. Design and development teams will pick the platform that lets them build the best web experiences in the fastest and most sustainable way — for the same reason that most enterprises now use tools like spreadsheets, which are arguably under the ‘no-code’ umbrella, for workloads that decades ago were solved with big development teams.” Changes Webflow had previously raised nearly $75 million, and with another $140 million in the bank it’s well-financed to double down on product development. It will also be able to expand deeper into the enterprise space, positioning itself alongside a growing array of tools from tech giants. In the last year alone, Amazon’s AWS launched a no-code app development platform called Honeycode , while Google acquired enterprise-focused AppSheet. This “citizen developer” trend has been unfolding for some time, with Gartner noting in a 2019 report called The Future of Apps Must Include Citizen Development that by 2023, citizen developers within large enterprises will outnumber professional developers by at least 4 times. But there is no escaping the fact that Webflow and its ilk have real limitations when it comes to building more full-featured web applications without using third-party integrations and/or engineers. Webflow will use part of its funding to address this gap, as it looks to move beyond website development and into “more powerful web applications.” There are currently plenty of third-party no-code tools designers can use to extend their Webflow-powered websites’ functionality, such as MemberStack for creating logged-in experiences and enabling payments; Jetboost for enabling real-time search and filtering; and Weglot for making websites multilingual. But this also highlights how far Webflow has to go to become a truly self-contained entity for web apps. 
“Right now, if you want to build truly interactive applications or membership sites with a logged-in experience, you need other tools in addition to Webflow,” Magdalin said. “However, these no-code tools are still in the early innings, and the vast array of products and services we all use today still need engineering teams to implement. There is a good amount of complexity that can be achieved with our integrations, but there’s definitely a long road still to go to solve more complex problems. This funding round will help us invest more into our product development — and our ecosystem — as we look to expand the use cases we can serve.” Magdalin said the company’s next focus will be to enable Webflow users to build a logged-in experience natively, a feature it plans to roll out later this year. This will open the doors to a wider array of website types, spanning paid membership services, subscriptions, and even enterprise intranets. Additionally, Magdalin said the company plans to introduce some machine learning smarts to help designers build their sites. “Since we now have millions of well-structured sites created on Webflow, it’s possible to create models that aid creators in making design decisions that can lead to higher converting landing pages and things of that nature,” he said. What’s clear in all of this is that the no-code/low-code development market is gaining steam. A recent market research report pegs the global low-code development platform market at more than $10 billion and suggests it could rise to $187 billion by 2030. One factor influencing this growth is that every company is now effectively a software company (to varying degrees). At the same time, there’s a software engineering talent shortage , but opening up the web development process to less technically skilled people will go some way toward addressing that gap. “There are so many jobs, and so much opportunity in web-based business,” Magdalin said. “People cannot learn to code fast enough, and there’s much more demand for software-enabled solutions than there are new software engineers being trained, which has led to a boom in these no-code development tools.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,055
2,021
"IT skills gap is forcing leaders to prioritize cloud and security hires | VentureBeat"
"https://venturebeat.com/2021/11/17/it-skills-gap-is-forcing-leaders-to-prioritize-cloud-and-security-hires"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IT skills gap is forcing leaders to prioritize cloud and security hires Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Seventy-six percent of IT decision-makers worldwide face critical skills gaps in their departments, increasing 145% since 2016, according to Skillsoft’s Global Knowledge 2021 IT Skills and Salary Report. At the same time, 50% of IT departments say cybersecurity is their most important area of investment, followed by cloud computing, governance, and compliance. Thus, in a nutshell, Skillsoft quantifies the paradox facing all IT departments in 2022. Skillsoft’s report examines the growing global IT skills gap by region, with additional insights into the most in-demand IT skills, current salaries and compensation, and organizations’ plans for training, leadership development, and certifications. Based on 9,300 responses from IT professionals in North America; Latin America; Europe, the Middle East, Africa (EMEA); and Asia-Pacific, the report provides a globally based, realistic assessment of the growing IT skills gap. Pressure to innovate widens skills gap Digitally transforming customer experiences forces IT to innovate quickly yet more disruptively. Securing each new application and platform requires new cybersecurity techniques and technologies most IT departments don’t have staff to support today. Many in IT are asked to do two or more jobs at once as a result. Fifty-five percent of IT decision-makers say the skills gap’s greatest impact on enterprises today is increased stress on employees. However, overworking IT teams isn’t the solution, even in the short term, as enterprises say they have difficulty meeting quality objectives (42%), decreased ability to meet business objectives (36%), and often see project durations increase (35%). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Facing time and resource constraints to get new IT initiatives launched and secured, it’s understandable that just over half (54%) of IT decision-makers say they’ve been unsuccessful in filling at least one position, and 38% have three or more positions needed to fill. Furthermore, IDC predicts that by 2022, the monetary losses resulting from IT skills gaps will be $775 billion worldwide. It’s the pace of technological change and the intense pressure on IT teams to innovate that most contribute to the widening skills gap. 
Just over a third (38%) of IT decision-makers cite the rate of technological change quickly outpacing their existing skills development programs as the primary driver, followed by difficulties attracting qualified candidates (35%) and a lack of investment in training resources (32%). In addition, 25% of enterprises say they cannot afford to pay the salaries that experienced, in-demand IT professionals want. IT is under the greatest pressure to innovate in cybersecurity. Fifty percent of enterprises are actively investing in new cybersecurity systems and platforms, according to IT leaders surveyed. And for the sixth straight year, cybersecurity professionals are the most sought after globally. Yet cloud computing is the second most difficult hiring area, according to 28% of managers worldwide. In addition, cloud adoption rates are outpacing training, so IT decision-makers struggle to find the right individuals to keep up with evolving technology needs. Above: Finding IT professionals with expertise in cybersecurity, cloud computing, or analytics and big data is the most challenging recruiting goal for any IT decision-maker today; 42% of hiring managers are having trouble filling cybersecurity positions today. Salary gains are strong for IT pros. IT professionals with cloud computing, cybersecurity, analytics and big data, and AI and machine learning skills are the most in-demand today and regularly receive invitations to interview for new jobs. According to discussions VentureBeat has had with IT directors and CIOs currently hiring, it's common for experienced IT professionals in these fields to be offered 25%-30% or more over their base salary, a signing bonus, and stock options. The study found that IT professionals in North America with cloud computing expertise earn an average salary of $144,533 today, and cybersecurity experts are paid $132,163. North American IT professionals are earning an average base salary of $121,544 this year. IT decision-makers in North America, on average, earn $28,770 more than IT staff. North America is the highest paying geographic region by a wide margin for IT professionals today. Above: The average North American IT professional salary of $121,544 is 58% higher than the global average salary of $76,703. North American IT decision-makers' average salary of $138,719 is 66% higher than global IT decision-makers' average salary of $83,497. The five highest-paying tech jobs have an average salary of $133,802, $12,258 more than the average IT professional's salary in North America today. The highest-paying IT job in 2021, excluding C-level and VP/director level executives and IT sales and marketing, is cloud computing, paying an average salary of $144,533. The following table compares average salaries by job function across each region of the study. Above: Cloud, risk management, IT architecture and design, cybersecurity/IT security, and audit/IT compliance are the five highest-paying tech jobs in North America this year. Personal development helps retention. Investing in employees' professional development, including paying for certifications, proves to be one of the most effective retention strategies. In addition to respondents saying they are more engaged and efficient at work based on the knowledge they gained from certifications, employers see it as an excellent deterrent to having valuable employees leave. 
The study finds that employees are quick to leave employers not offering this benefit, with 59% choosing to leave an employer due to a lack of growth and development opportunities. Over a third (39%) say training and development are more important than compensation (39%) and work/life balance (31%). More surprising is the finding that 37% of IT decision-makers say their organizations don’t provide formal training to keep their skills up to date. Given how competitive today’s hiring climate is, that’s a decision that needs to be reconsidered. IT decision-makers are quick to attribute financially measurable value gains to employees with certifications. Sixty-four percent of IT decision-makers say certified employees deliver $10,000 or more in additional value than non-certified employees. Time and money invested in certification training positively affect enterprises’ bottom lines, even though 10% of IT staff report that management does not see a benefit to training or does not approve it. Fifty percent of enterprise IT decision-makers are seeing between a $10,000 to $29,999 annual economic benefit from certified employees, making certifications one of the best investments in employee development there is. Ninety-two percent of all respondents said they have at least one certification, a 5% and 7% increase compared to 2020 and 2019, respectively. Above: The overwhelming majority of enterprise IT decision-makers see a positive economic benefit from hiring and developing certified employees internally. It’s proving to be an effective strategy for closing the skills gap. More than money talks to IT pros Attracting and retaining IT professionals is more about creating a strong culture of continual professional development that includes paying for certifications than it is about salaries alone. Professional development programs make the best IT professionals more motivated to gain greater mastery of their fields. IT decision-makers who see this and double down on certification and professional development training set the foundation for a more stable, engaged workforce that will churn less than competitors relying on cash incentives alone. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,056
2,022
"No-code development platform Webflow eyes the enterprise with $120M fundraise | VentureBeat"
"https://venturebeat.com/2022/03/16/no-code-development-platform-webflow-eyes-the-enterprise-with-120m-fundraise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages No-code development platform Webflow eyes the enterprise with $120M fundraise Share on Facebook Share on X Share on LinkedIn Webflow on laptop Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Webflow has raised $120 million in a series C round of funding, as the no-code web development platform eyes a larger piece of the pie in the enterprise sector. Founded in 2012, Webflow in its original guise was focused more on low-functionality websites, but today it’s used by larger businesses such as Dell, Zendesk, and Rakuten to remove much of the manual spadework from the development process. Indeed, much in the same way as businesses have transitioned to cloud infrastructure to alleviate themselves of the burden that comes with managing their data centers, companies are increasingly seeking new ways to make other aspects of their operations more efficient — and that includes democratizing the development process through visual tools. “By dramatically simplifying the creation of websites, Webflow has enabled hundreds of thousands of individuals and businesses to build complex, beautifully designed websites,” explained Ali Rowghani, managing director at Y Combinator’s Continuity fund, which led Webflow’s latest round. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Enterprising Webflow also revealed today that its enterprise product , which includes extra features such as advanced security, custom traffic scaling, and guaranteed uptime, has grown by 600% in the past year, and is now used by companies such as PwC, Univision, and Discord. With another $120 million in the bank, Webflow said that it plans to traverse further down the enterprise path, as well invest “in its core community.” “This funding will help accelerate our work towards our mission to empower millions of people to create for the web,” Webflow cofounder and CEO Vlad Magdalin said in a statement. A quick peek across the software development landscape reveals that the no code / low-code movement is showing little sign of slowing. There are now a wealth of platforms designed to help anyone in a company build functional websites or robust internal business apps — startups such as Stacker , Budibase , and Softr show that, as do more established players such as Appian , Google’s AppSheet , and Amazon’s Honeycode. 
Prior to now, Webflow had raised around $215 million, and with its latest cash injection the San Francisco-based company's valuation has soared to $4 billion, almost double its valuation from last year. Other notable backers in Webflow's series C round include Alphabet's CapitalG, Accel, Draper Associates, and Silversmith. "
15,057
2,022
"4 ways personalization can help brands reconnect with consumers | VentureBeat"
"https://venturebeat.com/2022/03/11/4-ways-personalization-can-help-brands-reconnect-with-consumers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community 4 ways personalization can help brands reconnect with consumers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Diane Keng, CEO of Breinify As we enter the third year of the pandemic, brands are looking to help consumers reconnect. With people now spending so much time online, the most effective way for brands to do this is by creating meaningful and relevant digital experiences with personalization that span the consumer journey. The importance of optimizing the consumer journey It’s no longer enough to personalize digital experiences by demographics and broad segments. Consumers want to be catered to as individuals, which means brands must deliver personalized experiences that are relevant at every stage of their journey. With the increase in time spent across digital platforms, the consumer journey is now more complex and takes place across multiple channels. There’s no longer a linear funnel — a clear path from point A to point B, like in a physical store. The journey has splintered into numerous touchpoints across multiple digital channels, where decisions are made and preferences formed in real-time. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google’s content marketing team discovered that consumers experience about 150 of these “micro moments” per day , including purchase moments, research moments, and discovery moments, among others. In these moments, brands have mere seconds to capture and keep consumers’ attention. It’s no longer enough to personalize digital experiences by demographics and broad segments. Personalizing each intent-rich moment can help brands gain a competitive advantage. These are opportunities to earn a consumer’s loyalty or turn them off completely. Here’s an example of this level of personalization in action: Diane usually buys wine online from her favorite online beverage retailer on Thursdays to prepare for the weekends. However, before a long weekend, she buys beer in addition to wine because she often hosts dinner parties and enjoys spending time with friends. With effective personalization, before a regular weekend, Diane will see product recommendations for wine, but before a long weekend, she will see product recommendations for both beer and wine, perhaps even with discount pricing on bundles of both. 
This ensures that Diane has a good experience and finds exactly what she’s looking for at the right moment. How brands can create micro moments with personalization 1. Set clear business objectives on what personalization will accomplish both in the short- and long-term. Long-term objectives, like increasing sales, are important but sometimes difficult to measure or take a while to see results. Short-term objectives, on the other hand, can be used to measure progress and results relatively quickly. For example, it would be easier to gauge the results if the short-term objective was to improve cart-to-checkout conversions on ecommerce channels in a quarter. 2. Identify one or two data-driven use cases that could help achieve these goals on a specific marketing channel. Using the above example, there are a couple of ways to improve the cart to check out conversions on e-commerce channels. The most basic use-case for personalization for brands is to provide personalized product recommendations based on purchase history and changing preferences to increase the likelihood of a conversion. Alternatively, if things are sitting in a consumer’s cart for a week, nudge them to check out by offering a 20 percent discount that expires in two days. 3. Use AI-driven predictive personalization technology, which can deliver meaningful experiences and also be used to measure results. AI-driven solutions that are plug-and-play enable marketing teams to be more data-driven because they can identify patterns and discover opportunities for personalization that humans can’t. Predictive personalization for brands allows marketers to see results quickly and deliver personalized experiences at scale without manual-heavy processes like analyzing tons of data and then setting up segments and personalization rules. Aside from quick results, these tools also improve marketing ROI. 4. Pick tools and vendors that align with your business goals and offer solutions that are scalable. Technology should make your team’s day-to-day easier, not more difficult. Choose tools and vendors that understand your business, offer easy integration with existing marketing technology, and that don’t require too much manual configuration or too many engineering resources. Most tools take a minimum of six months to integrate, but it’s important to find tools that can integrate within weeks to get results faster. Brands can improve their connections with consumers by understanding their journey and optimizing it in meaningful ways — by creating relevant experiences at the right time through predictive personalization. This involves understanding consumers’ changing preferences, external factors, and additional context that might impact their behavior to provide them exactly what they are looking for at intent-rich moments. It’s a win-win because consumers find what they need at the appropriate time and have a good experience, and brands can meet their business objectives and improve marketing ROI. Diane Keng is the CEO and cofounder of Breinify DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
"
15,058
2,021
"Data breach extortion scheme uncovered by NCC Group | VentureBeat"
"https://venturebeat.com/2021/10/18/data-breach-extortion-scheme-uncovered-by-ncc-group"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data breach extortion scheme uncovered by NCC Group Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past few months NCC Group has observed an increasing number of data breach extortion cases, where the attacker steals data and threatens to publish said data online if the victim decides not to pay. Given the current threat landscape , most notable is the absence of ransomware or any technical attempt at disrupting the victim’s operations. Within the data breach extortion investigations, NCC Group has identified a cluster of activities defining a relatively constant modus operandi described in this article. NCC Group tracks this adversary as SnapMC and has not yet been able to link it to any known threat actors. The name SnapMC is derived from the actor’s rapid attacks, generally completed in under 30 minutes, and the exfiltration tool mc.exe it uses. Extortion emails threatening their recipients have become a trend over time. The lion’s share of these consist of empty threats sent by perpetrators hoping to profit easily without investing in an actual attack. SnapMC, however, has shown itself capable of actual data breach attacks. The extortion emails NCC Group has seen from SnapMC give victims 24 hours to get in contact and 72 hours to negotiate. Even so, NCC Group has seen this actor start increasing the pressure well before countdown hits zero. SnapMC includes a list of the stolen data as evidence that they have had access to the victim’s infrastructure. If the organization does not respond or negotiate within the given timeframe, the actor threatens to (or immediately does) publish the stolen data and informs the victim’s customers and various media outlets. Modus operandi At the time of writing, NCC Group’s Security Operations Centers (SOCs) have seen SnapMC scanning for multiple vulnerabilities in both webserver applications and VPN solutions. NCC Group has observed this actor successfully exploiting and stealing data from servers that were vulnerable to remote code execution in Telerik UI for ASPX.NET, as well as SQL injections. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! After successfully exploiting a webserver application, the actor executes a payload to gain remote access through a reverse shell. Based on the observed payloads and characteristics, the actor appears to use a publicly available Proof-of-Concept Telerik Exploit. 
Directly afterwards, PowerShell is started to perform some standard reconnaissance activity. Observed commands include: whoami; whoami /priv; wmic logicaldisk get caption,description,providername; and net users /priv. Note that in the last command the adversary used the /priv option, which is not a valid option for the net users command. In most of the cases NCC Group analyzed, the threat actor did not perform privilege escalation. However, in one case, it did observe SnapMC trying to escalate privileges by running a handful of PowerShell scripts: Invoke-Nightmare; Invoke-JuicyPotato; Invoke-ServiceAbuse; Invoke-EventVwrBypass; and Invoke-PrivescAudit. NCC Group observed the actor preparing for exfiltration by retrieving various tools to support data collection, such as 7zip and Invoke-SQLcmd scripts. Those, and artifacts related to the execution or usage of these tools, were stored in the following folders: C:\Windows\Temp\; C:\Windows\Temp\Azure; and C:\Windows\Temp\Vmware. SnapMC used the Invoke-SQLcmd PowerShell script to communicate with the SQL database and export data. The actor stored the exported data locally in CSV files and compressed those files with the 7zip archive utility. The actor used the MinIO client to exfiltrate the data. Using the PowerShell commandline, the actor configured the exfil location and key to use, which were stored in a config.json file. During the exfiltration, MinIO creates a temporary file in the working directory with the file extension […].par.minio. Mitigations. First, initial access was generally achieved through known vulnerabilities, for which patches exist. Patching in a timely manner and keeping (internet connected) devices up-to-date is the most effective way to prevent falling victim to these types of attacks. Make sure to identify where vulnerable software resides within your network by (regularly performing) vulnerability scanning. Furthermore, third parties supplying software packages can make use of the vulnerable software as a component as well, leaving the vulnerability outside of your direct reach. Therefore, it is important to have an unambiguous mutual understanding and clearly defined agreements between your organization and software suppliers about patch management and retention policies. The latter also applies to a possible obligation to have your supplier provide you with systems for forensic and root cause analysis in case of an incident. It's worth mentioning that, when reference-testing the exploitability of specific versions of Telerik, it became clear that when the software component resided behind a well-configured Web Application Firewall (WAF), the exploit would be unsuccessful. Finally, having properly implemented detection and incident response mechanisms and processes seriously increases the chance of successfully mitigating severe impact on your organization. Timely detection and efficient response will reduce the damage even before it materializes. Conclusion. NCC Group's Threat Intelligence team predicts that data breach extortion attacks will increase over time, as they take less time and less in-depth technical knowledge or skill than a full-blown ransomware attack. In a ransomware attack, the adversary needs to achieve persistence and become domain administrator before stealing data and deploying ransomware. In data breach extortion attacks, most of the activity could be automated and takes less time while still having a significant impact. 
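For defenders who want a quick triage step, the staging folders and exfiltration leftovers listed above translate directly into a simple filesystem sweep. The sketch below only looks for those specific artifacts on a Windows host; the patterns are taken from the behaviour described in this report, and their absence, or even their presence, proves nothing on its own.

    from pathlib import Path

    # Staging directories and artifact patterns drawn from the activity described above.
    SUSPECT_DIRS = [r"C:\Windows\Temp", r"C:\Windows\Temp\Azure", r"C:\Windows\Temp\Vmware"]
    SUSPECT_PATTERNS = ["*.par.minio", "config.json", "*.7z", "*.csv"]

    findings = []
    for directory in SUSPECT_DIRS:
        base = Path(directory)
        if not base.exists():
            continue
        for pattern in SUSPECT_PATTERNS:
            findings.extend(base.glob(pattern))

    for path in findings:
        print(f"possible staging or exfiltration artifact: {path}")
    if not findings:
        print("no matching staging artifacts found (this is not evidence of absence)")

Anything flagged this way should be handed to an incident response process rather than deleted, since the files themselves are evidence.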
Therefore, making sure you are able to detect such attacks, in combination with having an incident response plan ready to execute at short notice, is vital to efficiently and effectively mitigating the threat SnapMC poses to your organization. NCC Group's Research and Intelligence Fusion Team (RIFT) leverages its strategic analysis, data science, and threat hunting capabilities to create actionable threat intelligence, ranging from IoCs and detection capabilities to strategic reports on tomorrow's threat landscape. Cybersecurity is an arms race where both attackers and defenders continually update and improve their tools and ways of working. To ensure that its managed services remain effective against the latest threats, NCC Group operates a Global Fusion Center with Fox-IT at its core. This multidisciplinary team converts cyberthreat intelligence into powerful detection strategies. This story originally appeared on Research.nccgroup.com. Copyright 2021. "
15,059
2,022
"3 tips to create a successful work-from-anywhere strategy | VentureBeat"
"https://venturebeat.com/2022/03/08/3-tips-to-create-a-successful-work-from-anywhere-strategy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 3 tips to create a successful work-from-anywhere strategy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Yoni Avital, cofounder and COO at ControlUp. Over the past two years, companies around the globe have continually changed the way they conduct business to meet new needs that will, ultimately, find 60% of us working remotely by 2024. For CEOs, Chief Resource Officers (CHROs), and hiring managers, recruiting and retaining an engaged (remote) workforce has become their primary test. To do that successfully, IT must focus on improving the digital employee experience (DEX) to give those companies a clear hiring advantage , given there are so many benefits to have a productive WFA workforce. At its core, digital employee experience ( DEX ) focuses on the ability to identify and eliminate any digital friction — such as poor Wi-Fi signals , hardware failure, or application performance — that impacts employee satisfaction. DEX tools help IT speed onboarding, improve patching velocity, and resolve issues quickly through intelligence-driven experience automation via machine learning. IT leaders will also receive the continuous insights they need to become the leaders in their sector: accelerating time to value of investments and improving employee experience as a full partner to the business. Though many companies have increased their investment in the digital workplace, Gartner found that the employee experience with technology remains a “black box” for most infrastructure and operations (I&O) leaders. In the future, IT will need to optimize remote environments — while at the same time preventing user downtime and resolving issues faster — for groups that are sometimes in, and mainly out of, the office. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Though Gartner calls the distributed enterprise one of the top 12 strategic technology trends this year, only one in ten organizations have begun communicating and piloting their vision for a hybrid work model. Those who enable it fully will achieve 25% faster revenue growth than peer companies who do not. Here are three major considerations for a well-rounded DEX strategy. Companies that move quickly and address each area will experience a significant business advantage. 1. In-person vs. 
remote work: balancing collaboration and productivity Almost 75 % of Fortune 500 CEOs say they will need less office space in the future, and according to a forward-looking survey of CPAs , one in five corporate executives say they plan to reduce office space in the coming year. But, if having a cool office no longer matters to employees, what does? The flexibility and the freedom to work from anywhere. This has led to a rise in remote worker productivity since the beginning of the pandemic, with 34% of employees feeling more productive now than before the pandemic (at only 28%). The same research also shows that more than half of U.S. executives (52%) said average employee productivity has improved in the same period. Many would also argue that teams work better when they collaborate in person, so offices will become those dedicated spaces. They’ll also require IT teams to integrate remote workers seamlessly, as well as adjust their strategies to manage both in-person and remote environments successfully. 2. Work-from-anywhere (WFA) is the new perk in the war for talent IT departments responded heroically when they were tasked with supporting a workforce that was suddenly and completely remote. For a while, it worked. However, being away from the office meant workers were dependent on local access. Tools that did the job perfectly when everyone was in the same office stopped working and employee engagement began to fall. A survey by noted economists found that 39% of college graduates who quit their jobs during the Great Resignation of 2021 said they did so to increase their ability to work from home. And Harvard Business Review found that 50% of workers say they won’t stay at jobs that don’t offer remote work—making it a key recruiting and retention tool. Remote work has been normalized and it’s essential to provide workers with the right tools to communicate, collaborate, and serve customers. 3. Hybrid and work-from-anywhere workplaces will continue to put DEX to the test Thanks to consumer tech, people expect the digital tools they use to simply work. In turn, employees now expect and demand a frictionless digital experience, whether working from home or at the office. The message is clear. It’s essential that companies put DEX front and center to ensure these remote work experiences are simple and reliable, while, ultimately, increasing employee productivity and overall satisfaction. Happy workers are more productive workers By 2025, 50% of IT organizations will have a digital employee experience strategy, team, and management tool in place, a major increase from 5% in 2021. IT is already beset by supply chain issues, figuring out the future of work, and fending off cyberattacks. How will a new focus on digital employee experience be a net positive? Eventually, all companies will be digital-first, so now is the time to develop an employee-centric approach to the distributed enterprise. Our new work-from-anywhere culture has reminded us that employee satisfaction and happiness are crucial when it comes to productivity and overall growth. Organizations need to set employees up for success by minimizing and streamlining those time-consuming bottlenecks and administrative activities. Getting your company technology infrastructure updated and implemented in a way that works for the new normal is a key to this process. There are only so many hours in the day — make them count. Yoni Avital is the cofounder and COO of ControlUp. DataDecisionMakers Welcome to the VentureBeat community! 
"
15,060
2,022
"The Great Resignation: Mitigating the risk of data loss | VentureBeat"
"https://venturebeat.com/2022/04/07/the-great-resignation-mitigating-the-risk-of-data-loss"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The Great Resignation: Mitigating the risk of data loss Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The world of work is radically changing. People have embraced remote and hybrid work , job vacancies and salaries are approaching all-time highs, and resignations are at record levels. Navigating this volatility can be a challenge for businesses of any size. We all know that high staff turnover can negatively affect the continuity of service to customers and slow down business operations. However, staff departures increasingly carry with them an even greater danger – data loss. Data loss can mean physically losing vital information: for example, if it has been improperly stored on a departed staff member’s personal device; losing knowledge on how to access, collect or use data; or losing awareness that the data actually exists. It can cover everything from passwords to customer information and marketing databases, through to source codes, developer documentation and other critical pieces of business intelligence. The portability and abundance of data coupled with the decentralization of teams via remote working mean that the risk of catastrophic data loss grows with each team member that leaves. Damage may not just be limited to a loss of information; there’s scope for both business functionality and brand reputation to take a serious hit. Not to mention, with data protection laws strengthening across the world, the very real risk of legal trouble. Protecting your business against these problems really comes down to a business reassessing its relationship with data. The first step is recognizing that vulnerabilities do exist. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Legacy architecture and data management platforms Many readers may be shocked to hear that many of the world’s largest companies use extraordinarily outdated or rudimentary data management solutions. We have often encountered organizations that use little more than an Excel spreadsheet to collect and manage some of their most sensitive data. These issues aren’t confined to big corporations; a significant proportion of startups also put data management low on the agenda. It is usually seen as something to sort out later — “when we’re a bit bigger.” However, the reality is that your data management infrastructure is the rock on which your business is built and loss is a very real risk. 
Complex analysis and information sharing are also underpinned by having a flexible, robust and open data architecture. It needs to continue to evolve in line with the scale and needs of your company and the advancements of data science techniques. Taking a piecemeal or “one and done” approach to planning and investing in data infrastructure is where businesses become unstuck. It will ultimately be the main reason your business can be exposed to so much risk if and when staff members leave. As such, it is critical to continually audit your data infrastructure. The priority should be ensuring that the management and upkeep of your infrastructure is a responsibility shared across the company. This means having robust procedures covering how information is documented, cleaned, protected and analyzed. By sharing responsibilities across departments and adopting an ‘always in review’ mentality, you can quickly identify where there are gaps in your infrastructure or where systems or policies need to be reviewed. After all, what may work for your development team may be completely inappropriate for your marketing team. This brings me neatly to a big problem to avoid: silos. Data silos create single points of failure Information flow can be one of the most profound issues a company — big or small — can face. Insights, data and actions trapped in the brains of laptops or systems of one department can be invaluable if shared with another, but a lot of different factors can get in the way: culture, policies, tech infrastructure and a lack of skills or education (more on that later). The Great Resignation magnifies the current problems because siloed data and insights are more acutely at risk of loss. All it can take is one key team member leaving and failing to accurately document or share crucial information. An immediate way to tackle this problem is tasking your managers, IT and HR departments to work together to create a “data exit interview.” To do this you will need to first: Get existing team members to self-audit all the information, access, documentation and other insight that they are aware of, have access to or are in charge of. Combine this information with a company-wide audit conducted by your IT team to identify gaps either where responsibility of information is unknown or information exists that was previously undocumented. Task your HR and relevant managers with the creation of bespoke questionnaires for each team member to cover basic questions to be answered ahead of their departure on the location, status and access to the data they may have been responsible for or have stored on their devices. This questionnaire should be conducted well ahead of a staff member’s final day at the company to allow for any unforeseen complications to be tackled and for your IT team to take an incremental approach to transferring controls and access. On that note, it is vitally important that access to all data and systems that carry company data are fully removed. It’s worth remembering that no data exit interview will be completely foolproof simply because there are instances where people don’t know what they know, or, at least, may not consider it worth sharing. Think about it in the same way you would seek to preserve institutional knowledge — questions need to be probing and investigatory, and all the information needs to be captured and documented in such a way that it can be easily captured and shared throughout your teams. 
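One way to operationalize the combined self-audit and IT-led audit described above is to cross-reference the resulting data-asset inventory against the list of upcoming leavers, and flag any asset whose only documented owners are about to walk out the door. The snippet below is a toy illustration of that cross-check; the file names and column layout (asset, location, owners, email) are assumptions made for the example, not a prescribed format.

```python
import csv

def orphaned_assets(inventory_csv, leavers_csv):
    """Flag data assets whose documented owners are all on the leavers list."""
    # Assumed layouts: leavers.csv has an 'email' column; data_inventory.csv
    # has 'asset', 'location' and a semicolon-separated 'owners' column.
    with open(leavers_csv, newline="", encoding="utf-8") as handle:
        leavers = {row["email"].strip().lower() for row in csv.DictReader(handle)}

    flagged = []
    with open(inventory_csv, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            owners = {o.strip().lower() for o in row["owners"].split(";") if o.strip()}
            if not owners or owners <= leavers:
                flagged.append((row["asset"], row["location"], sorted(owners)))
    return flagged

if __name__ == "__main__":
    for asset, location, owners in orphaned_assets("data_inventory.csv", "leavers.csv"):
        print(f"[!] {asset} ({location}): owners {owners or 'none documented'} leaving or missing")
```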
After all, an insight that may not mean much to your HR rep may mean something significant to one of your developers. Tear down those (data) walls Data exit interviews are a quick short-term fix, but long-term improvement and risk mitigation can only be achieved by tearing down all the data silos in your business. As mentioned above, culture, technology and procedures all play a hand in erecting data walls within your company and all three areas need to be tackled. Going into a full de-siloing strategy would take a separate lengthy article — so I’ll quickly summarise some of the key steps that apply to most businesses: Audit your tech to determine the best tech stack — as we discussed regarding data management — and expand your audit to focus on all your systems and how they interact. API-driven, flexible, cloud-based solutions that can scale with your needs and speak to each other are often the best choice. Do not immediately go for a mainstream monolithic stack that may lock you into more than you need or won’t flex with your needs. Do also be very cautious about building your own solution or attempting to bend existing systems. Audit your data — either with the help of outside experts or your own data team begin tracking down where all your information is, the systems that are used to collect it, the people who are in charge of managing and analysing it, how it is updated and cleaned and crucially, what it is used (or not used) for. Review your policies — how appropriate are your data governance, ethics and utilisation procedures? The key is to ask how you can make data a part of your entire company’s decision making policy. Everything from major strategy decisions to the day-to-day actions of every team member. The final step comes hand in hand with arguably one of the most important changes you can make to both protect and enhance your data knowledge — education. Upskilling, education and training are the answers to the data question In many companies, using and understanding data is the responsibility of only a handful of people. Not only does this exacerbate the risk of data loss when people leave, it’s also the single biggest impediment to a company becoming truly data-driven. Insights have lower value, staff members are not empowered to make their own decisions and innovation is limited to a select few “power users.” It can create bottlenecks and, if senior leaders are not properly skilled, can lead to bad company-wide decision making. Approaching this issue doesn’t mean creating a whole team of data scientists. Training and upskilling is a broad spectrum and should be tailored to individual team members and departments. The first step is to make everyone understand the basics of data analysis and statistics so they can interrogate their own data and properly scrutinize insights. Next, it’s about giving your team the tools they need to maximize the role of data in their careers. Technology, managers and procedures should all enable and support this process. Simply training people once, then expecting them to become immediately data-driven isn’t going to work. It requires a mindset change where every action or decision should, if possible, be underpinned by data. Making it a habitual part of your business by, for example, including data skills in how you assess pay and promotion, helps to engrain it into your company culture. 
Take it slowly but recognize the need to act If you've got this far, you've probably realized that the risk of data loss is a symptom of much bigger issues that companies need to grapple with. The more exposed your company is to losing vital information through departures, the more likely it is that your approach to data is not fit for purpose. But do not despair. Recognizing and committing to tackling the risks is a great first step, and the rewards you will reap will go far beyond protecting information. The key is to plan your approach and pilot your projects. Don't rush in and tear out all your data infrastructure or send your entire team on an intensive data training course. Start with what you can fix now, then experiment with your approach and carefully assess the ROI. It is much easier to take on bigger challenges if you know what does and doesn't work for your company and you've already seen gains from smaller initiatives. Becoming data-driven is not so much a journey with a fixed goal but rather a mindset shift underpinned by constant evolution and innovation. Natalie Cramp is the CEO of Profusion. "
15,061
2,022
"Microsoft: Data wiper cyberattacks continuing in Ukraine | VentureBeat"
"https://venturebeat.com/2022/03/02/microsoft-data-wiper-cyberattacks-continuing-in-ukraine"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft: Data wiper cyberattacks continuing in Ukraine Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft warned that the group behind the “HermeticWiper” cyberattacks — a series of data-wiping malware attacks that struck numerous Ukrainian organizations on February 23 — remains an ongoing threat. The warning came as part of an update published today by Microsoft on cyberattack activity that the company has been tracking in Ukraine. The update largely compiles and clarifies details on a series of previously reported wiper attacks that have struck Ukrainian government and civilian organizations over the past week. But the update also implies that additional wiper attacks have been observed that are not being disclosed for now. In particular, Microsoft indicates that as of now, “there continues to be a risk” from the threat actor behind the HermeticWiper attacks. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The string of wiper cyberattacks have coincided with Russia’s unprovoked troop build-up, invasion and deadly assault on its neighbor Ukraine. Russia is not mentioned in the Microsoft Security Response Center (MSRC) blog update today. The MSRC update also follows a blog post from Microsoft president Brad Smith on Monday, in which he stated that some recent cyberattacks against civilian targets in Ukraine “raise serious concerns under the Geneva Convention. ” HermeticWiper For starters, the MSRC blog update clarifies a point of confusion: The wiper malware that has been dubbed HermeticWiper by other researchers is, in fact, the same malware as the wiper that Smith referred to as “FoxBlade” in his Monday blog post. The initial HermeticWiper/FoxBlade attacks struck organizations “predominately located in or with a nexus to Ukraine” on February 23, Microsoft said in the blog. Other researchers have noted that the HermeticWiper struck Ukrainian organizations several hours before Russia’s invasion of Ukraine. The HermeticWiper attacks affected “hundreds of systems spanning multiple government, information technology, financial sector and energy organizations,” Microsoft said. Most concerning, however, is Microsoft’s apparent revelation that the HermeticWiper cyberattacks did not stop on February 23. While the company did not provide specifics, Microsoft appears to be describing an ongoing risk from the threat actor behind the HermeticWiper/FoxBlade attacks. 
“Microsoft assesses that there continues to be a risk for destructive activity from this group, as we have observed follow-on intrusions since February 23 involving these malicious capabilities,” the company said in the blog post update. VentureBeat has contacted Microsoft to ask if the company can specify on what dates it has observed the other attacks involving HermeticWiper/FoxBlade, and what the date was of the most recent attack involving that wiper malware. Microsoft did not provide any attribution for the HermeticWiper/FoxBlade cyberattacks, saying that the company “has not linked [the wiper malware] to a previously known threat activity group.” In the wake of the wiper attacks such as HermeticWiper, the FBI and the federal Cybersecurity and Infrastructure Security Agency (CISA) several days ago issued a warning about the possibility that wiper malware observed in Ukraine might end up impacting organizations outside the country. “Further disruptive cyberattacks against organizations in Ukraine are likely to occur and may unintentionally spill over to organizations in other countries,” CISA and the FBI said in the advisory. Other wipers In the blog post update today, Microsoft said it’s also tracking two other strains of malware associated with this threat actor behind HermeticWiper. Those malware families were identified Tuesday by researchers at ESET — “HermeticWizard,” described by ESET as a worm used for spreading HermeticWiper, and “HermeticRansom,” a form of decoy ransomware. (Microsoft is referring to HermeticRansom by the name “SonicVote,” and is putting HermeticWizard underneath the FoxBlade umbrella in its naming scheme). The MSRC blog update adds that Microsoft is aware of the wiper malware that has been named “ IsaacWiper ” by ESET researchers, and that was first disclosed by ESET on Tuesday. IsaacWiper — which Microsoft is referring to by the name “Lasainraw” — is a “limited destructive malware attack,” the blog update says. In terms of IsaacWiper/Lasainraw, “Microsoft is continuing to investigate this incident and has not currently linked it to known threat activity,” the blog says. As alluded to in the section on HermeticWiper, Microsoft characterizes the overall wiper activity in Ukraine as ongoing. The blog update notes that Microsoft “continues to observe destructive malware attacks impacting organizations in Ukraine.” VentureBeat has reached out to Microsoft to ask if this means that the company has observed other recent wiper attacks in Ukraine, beyond the ones that are listed in the blog. VentureBeat has also asked if Microsoft can say when the last wiper attack occurred in Ukraine that it has observed. All in all, with the wiper cyberattacks in Ukraine, “we assess the intended objective of these attacks is the disruption, degradation and destruction of targeted resources,” the updated Microsoft post says. Targeted attacks The mention of the attack being “targeted” at certain resources echoes what Smith said in his post on Monday, when he stated that “recent and ongoing cyberattacks [in Ukraine] have been precisely targeted. He noted that the use of “indiscriminate malware technology,” such as in the NotPetya attacks of 2017, has not been observed so far. The MSRC blog update does not appear to mention several recent cyberattacks in Ukraine that Smith alluded to in his Monday post. 
Smith, for instance, mentioned recent cyberattacks in Ukraine against the “agriculture sector, emergency response services [and] humanitarian aid efforts.” The MSRC blog does not appear to provide details on those cyberattack incidents, since there’s no direct mention of any of those targets being affected by any of the attacks discussed in the post. The post does note that the “WhisperGate” attack on January 13 — the first in this series of destructive malware attacks against Ukrainian organizations — did affect some non-profit organizations in Ukraine. Microsoft does not specifically attribute any of the attacks in the blog update, saying only that “some of these threats are assessed to be more closely tied to nation-state interests, while others seem to be more opportunistically attempting to take advantage of events surrounding the conflict.” “We have observed attacks reusing components of known malware that are frequently covered by existing detections, while others have used customized malware for which Microsoft has built new comprehensive protections,” the company said in the update. Citing a well-known expert on cyberattacks , The Washington Post and VentureBeat reported Sunday that data-wiping malware had struck a Ukraine border control station in prior days. The wiper attack forced border agents to process refugees fleeing the country with pencil and paper, and contributed to long waits for crossing into Romania, according to the expert, HypaSec CEO Chris Kubecka. The cyberattack on the Ukraine border control station was first reported by the Washington Post. The State Border Guard Service of Ukraine and the Security Service of Ukraine have not responded to email messages inquiring about the attack. In his blog post Monday, in saying that some recent Ukraine cyberattacks “… raise serious concerns under the Geneva Convention,” Smith referenced the international treaty that defines what are commonly referred to as “war crimes.” The Ukrainian government is a customer of Microsoft, and so are “many other organizations” in Ukraine, he noted in the blog. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,062
2,023
"As Russia's Cyberattacks on Ukraine Mount, the Risk of Impact in Other Countries Rises - CNET"
"https://www.cnet.com/tech/services-and-software/as-russias-cyberattacks-on-ukraine-mount-the-risk-of-fallout-in-other-countries-rises"
"Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more News Privacy As Russia's Cyberattacks on Ukraine Mount, the Risk of Impact in Other Countries Rises Cyberattacks are a part of Russia's military strategy. Bree Fowler Senior Writer Expertise cybersecurity, digital privacy, IoT, consumer tech, smartphones, wearables Bree Fowler Feb. 17, 2022 6:10 p.m. PT 4 min read Cyberattacks against Ukraine could have a long reach. Getty Russia could ramp up cyberattacks against Ukraine in an effort to destabilize its government and economy, security experts warn, an online assault that potentially might spread to other countries, including the US. War in Ukraine In recent weeks, the Russian government is believed to have initiated a handful of cyberattacks against Ukraine. Last month, hacker groups linked to Russia's intelligence services were blamed for a cyberattack that defaced dozens of Ukrainian government sites with a message warning the country to "be afraid and expect the worst." Days later, Microsoft said it had identified dozens of computer networks at Ukrainian government agencies and organizations infected with destructive malware disguised as ransomware. On Feb. 16, the New York Police Department warned that Russian or pro-Russian criminal threat actors could launch cyberattacks on infrastructure, government entities, and local law enforcement according to CBS News. Cybersecurity experts say the attacks could be a precursor to more serious cyberassaults on Ukraine, which Russia is determined to prevent from joining the NATO security alliance. Russia has amassed more than 100,000 troops on Ukraine's border, raising concerns Moscow may be preparing for an invasion of its neighbor. Russia annexed a portion of Ukraine in 2014. The Russian troop buildup has prompted a flurry of diplomatic activity aimed at defusing tension. So far, those efforts haven't been successful. US intelligence officials said in early February that they had evidence that Russia was planning to create a video that will depict a fake attack on its troops that could be used as a pretext to invade Ukraine. Days later, President Joe Biden urged Americans in Ukraine to leave rather than risk getting caught in a potential invasion. If Russia does invade, it will undoubtedly employ more cyberattacks as part of its military strategy, researchers say. Read More: Russia Invades Ukraine: What to Know What Russia's Ukraine Invasion Means for the US Economy and More Reliable Twitter Accounts to Follow What's Happening in Ukraine Adam Meyers, senior vice president of intelligence at CrowdStrike, says the current round of cyberattacks on Ukraine could indicate Russia is refining its cyber capabilities. Russia's game plan with online attacks, he says, is to create chaos and inflame tensions between the two countries. "From what we've seen in Ukraine historically, it's almost been a laboratory of experimentation for Russia," Meyers said. He pointed to the NotPetya attack, which crippled computers across Ukraine in 2017. The malware locked up files like criminal ransomware would. When experts took a closer look, however, they realized that its true purpose was to destroy data rather than make money. NotPetya did what it was intended to do -- wreak havoc in Ukraine. 
It also spread to unintended targets far outside of that country, shutting down companies including FedEx, Merck, Cadbury and AP Moller-Maersk. The most recent malware attacks against Ukrainian targets, dubbed WhisperGate , also appear to be bent on destruction rather than making money, Meyers said. Of course, cyberattacks will only be part of a broader campaign if Russia chooses to invade Ukraine, with malware and online disinformation being among the many weapons the country could use. Quentin Hodgson, a senior international and defense researcher at the Rand Corporation focusing on cybersecurity, said Russia's cyberoperations are unique because they aren't clearly separated from conventional military and intelligence operations as they are in other countries, including the US. Still, Russia will likely lean on old-school military muscle to grab the attention of the Ukrainian people, he says "At the end of the day, they're still massing troops on the border," Hodgson said. "That's sending a signal that cyber can't." According to a memo obtained by CNN in January, the Department of Homeland Security warned operators of US critical infrastructure, along with state and local governments, that Russia could launch a cyberattack on US targets if it feels its long-term security is threatened by a NATO or US response to what's going on in Ukraine. CrowdStrike's Meyers said he thinks it's unlikely Russia would intentionally provoke the US with a state-sponsored attack against an American target. But US companies with a presence in Ukraine, such as hotel chains, along with international aid groups and think tanks, might have something to worry about. Russia also could just look the other way, as it has for many years, when the cybercrime gangs known to run rampant within its borders go after US targets. While Russian government arrests of known ransomware gang members and other cybercriminals have grabbed headlines recently, President Vladimir Putin hasn't historically been much help in bringing cybercriminals that target the West to justice, says James Turgal, former executive assistant director for the FBI's information and technology branch. Turgal, who now serves as vice president of cyber risk, strategy and board relations for Optiv Security, says last year's ransomware attacks against Colonial Pipeline and meat processor JBS USA should serve as wakeup calls to all companies, especially those that can be considered critical infrastructure. Both those attacks were attributed to cybercriminals in Russia. The NotPetya attack was a perfect example of how cyberattacks can affect countries far away from the conflict, he said. "Whether it's intentional or not, whether you're a particular target or just collateral damage," Turgal said, "the threat is real." More From CNET Deals Reviews Best Products Gift Guide Shopping Extension Videos Software Downloads About About CNET Newsletters Sitemap Careers Policies Cookie Settings Help Center Licensing Privacy Policy Terms of Use Do Not Sell or Share My Personal Information instagram youtube tiktok facebook twitter flipboard "
15,063
2,022
"We need DeFi regeneration: Examining the systemic future of finance | VentureBeat"
"https://venturebeat.com/2022/03/10/we-need-defi-regeneration-examining-the-systemic-future-of-finance"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest We need DeFi regeneration: Examining the systemic future of finance Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. By Shaun Conway, Ph.D. , founder of IXO You have probably heard the hype around decentralized finance or DeFi — the future of financial services and the next frontier for the cryptocurrency space. Major global institutions are placing bets on DeFi — Goldman Sachs , Morgan Stanley , BNY Mellon , JPMorgan , BlackRock , you name it, they’re putting crypto on their balance sheets, offering digital asset trading and custody. DeFi transaction volume surged by 912% in 2021, according to Chainalysis , of which transactions involving illicit addresses accounted for an all-time low of just 0.15%. A positive trend, no doubt. But speaking of placing bets, there is another side of DeFi that is arguably both helping and hindering this financial revolution — the degenerates. A crypto subculture known as DeFi Degens refers to a subculture that congregates in private Telegram and Discord channels sharing information about new DeFi projects through which they think they can generate profits…or simply have fun, in a financial economy space that has very little connection with what happens in the real-world economy. For many degens, decentralized finance is an opportunity to develop projects that mimic financial derivative trading mechanisms found in the traditional financial economy, combined with massive multiplayer online games. Participants are offered abstruse and unpredictable tokens that have many of the characteristics of casino chips in gambling venues. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Blurring the lines between jokes and scams, and playing on the increased meme-ification of finance to lure in tribes of investors, DeFi projects are attracting a slew of trading speculation. These games use hyperinflationary token supply mechanisms to draw in new investors, with the promise of high rewards, by offering liquidity farming or staking yields that are based on Ponzi-scheme dynamics. The yields only last as long as there are new participants joining the game and staking their assets. In essence, this is a house of cards built on unsustainable inflationary mechanisms, with inherent risks to users who are not versed in these increasingly niche and complicated financial schemes. 
Long before people on r/wallstreetbets declared their love for stonks, DeFi Degens on other subreddits, as well as on Telegram and Twitter, had been shouting their misspelled motto, “ HODL. ” That is, hold — buy a cryptocurrency — from Bitcoin to the latest coin — and keep it in your wallet in the hope that its value shoots up again, to the moon. In 2021, a whole new breed of unabashedly self-styled “meme-coins” emerged, such as Shiba-Inu and many other imitators of the Doge meme-coin that has famously been punted by Elon Musk. These meme-coins are generic crypto tokens that often have no technical edge on first-wave assets like Bitcoin or Ethereum but have swapped those cryptocurrencies’ ominous and technically sounding vibe for wacky logos and playful names. Now that crypto has officially entered the teen stage of development, how do the DeFi Degens contend with adolescence and play a responsible role in the real world? Degens now have the opportunity, and moral imperative, to evolve into ‘Regens’ or regenerates, whose participation in the crypto economy can have a positive impact on the world by making investments and taking actions that have relevance to the real economy. Since the 2008 global financial crisis, there has been a growing movement of people joining the crypto space to make a difference, with ambitions to rebuild the broken political-economic institutions and traditional financial systems that have got the world into such a mess, with the hope of creating more sustainable, transparent and accountable decentralized financial systems. This aspiration seems to have mostly been corrupted by DeFi schemes that have replicated and even amplified the greed-driven and unsustainable characteristics of the old system. Crypto protocols provide the building blocks for social institutions equipped to take on the challenges of today’s networked culture. The Degens now need to refocus their ambitions beyond meme coins, bored apes, fat penguins, and pet rocks and look for impactful ways to spend their finances, time, attention, and energy. So much of what is built-in crypto today has become self-referential and self-serving money games, concepts fundamentally indexed to profit and motivated by greed. This wave of digital disruption provides the historic opportunity for us to fundamentally reshape the purpose of finance in ways that will have transformative impacts on people and the planet. We can empower ordinary people as consumers, savers, lenders, borrowers, investors, and taxpayers, to be at the center of this economic regeneration. We need to use a regenerative mindset to figure out how to use these tools for real-world impact and start to address the major problems we face around climate change and global inequity. We’ve had our fun with crypto and the DeFi degenerative moment. Now it’s time to take all our learnings and apply what we know to regeneration that can start to heal the earth and society. It is time to focus on projects that address real-world socially impactful outcomes. Crypto and DeFi trading have the potential to draw billions of dollars into carbon markets and leapfrog slow-moving diplomatic negotiations to drive carbon price discovery and cross-exchange carbon trading. Let’s draw inspiration from Kim Stanley Robinson’s 2020 climate-themed thriller “The Ministry for the Future,” which described a global “carbon coin” to pay for decarbonization, including by paying off oil companies to keep fossil fuels in the ground. Already a movement is burgeoning in the industry. 
The Interchain Foundation's Earth Program's mission is to fund and build the trusted technologies, sovereign networks, and financial innovations that are needed to enable communities to prosper, regenerate the planet, and adapt to the climate crisis. Projects like the Regen Network address broken economic models that incentivize the degradation of land, the destruction of ecosystems, and the fueling of climate change. The launch of the KlimaDAO was also a huge signal of intent for a collective organization in support of shrinking the available pool of carbon credits to make carbon-offsetting projects more profitable, and gained a tonne of attention early on. Grassroots Economics has already created Community Inclusion Currencies (CICs), digital currencies that enable communities to create a token based on their future production of goods and services; these have so far been used by over 56,000 beneficiaries in East Africa. The crypto community ostensibly holds an ethos of liberalism, where "decentralization" frequently stands for community self-sovereignty. With the boom in wealth creation, there is ample opportunity to build technological solutions that serve local communities in ways that are sustainable and regenerative. But in practice, little space has been made for different values to be discussed or enacted. This is why, in the absence of coordinated mechanisms to enact our shared values, we default to the lowest common denominator: profit. Indeed, because of this movement, vast swathes of the existing financial system are currently being rebuilt on more transparent and accessible infrastructure. But there's a real risk if we don't clean it up and bring it to something impactful. It'll get completely co-opted by land-grabbing VCs, big institutions, banks, or the likes of Meta. The time is now to build technology that allows structures to demonstrate real economic well-being, improve quality of life, and respond to the biggest crises now facing humanity and the planet due to climate change and species extinctions. A sustainable and regenerative future can only be achieved by working together to build, promote, and deploy, at Internet scale, the innovative solutions that overcome the outdated and ineffective ways that we think about and deal with economic, environmental, and social development. Failure to act would be a wasted opportunity and risks diverging from the needs of citizens for inclusive sustainable development. Shaun Conway, Ph.D., is the founder of IXO. "
15,064
2,022
"Why NFTs should be seen as worldbuilding opportunities for brands | VentureBeat"
"https://venturebeat.com/2022/04/19/why-nfts-should-be-seen-as-worldbuilding-opportunities-for-brands"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why NFTs should be seen as worldbuilding opportunities for brands Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Brands planning to sell NFTs in 2022 face stiff competition and rising holder expectations. Some successful NFT projects like World of Women offer utility-focused NFTs to holders with real-world value. Some have partnered with motion designers, musicians or 3D artists to stand out. Others write stories where NFTs are characters and a community votes on their “journeys.” A select few make cartoons to keep holders engaged. Companies and groups have already gone beyond traditional monetary incentives to attract Web3 developers in today’s tight job market. All of those NFT projects share one thing: none are tied to a legacy brand or recognized IP. The “why” is clear to crypto art enthusiasts, Discord community leads, brand strategists and storytellers — brands aren’t creating a presence. This key element to creating stickiness has been ignored as big consumer brands, towed by their PR teams, sprint to the expectation of NFT windfalls. But they forget to bring the Web3 picks and shovels. Doing the required NFT homework Tactful marketers see beyond the hype around NFTs and know there are more use cases for the technology beyond art and collectibles. Savvy marketers are planning NFT debuts by studying how current projects flourish, market, promote themselves and organically grow without logos or sponsorship money. These strategy-first marketers view Discord servers as classrooms to absorb how token holders and non-holders interact to help each other. They’re also searching for creative and behavioral trends that trigger certain NFT projects to boom. Another prerequisite is considering which blockchain community fits your project. This is done by joining Telegram groups connected to these disparate groups. For instance, Ethereum is often used by artists for one-of-one drops or small collections, while scalable offerings like Polygon and Solana are most adopted by gaming communities. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! A critical consideration is the ecological impact or adopting a blockchain that uses proof-of-work or proof-of-stake consensus. 
Blockchains such as Solana or the sidechain Polygon offer more eco-friendly transactions due to the technology behind their systems — proof-of-history and proof-of-stake, respectively. The next “NFT Super Mario flag” for marketers is determining what their target audience knows about NFTs. Customer interviews via forms and live Q&As work well here. This step is needed before mapping out probable short and long-term outcomes resulting from brands using NFTs in their 2022 marketing strategies. Doing things that previously worked won’t work I’ve been on calls where reporters asked brands what utility or quantifiable value their NFTs grant. PR teams can no longer sprinkle NFT pixie dust on announcements to make companies or organizations look like first-movers. Just months ago, press releases were enough to score media coverage. But not anymore. NFT holders and the media have raised their standards, forcing marketers to step up their offerings. While crypto exchange Coinbase has upwards of 56 million active user accounts , we’re still at the innovator’s phase of the five-stage technology adoption life cycle created by Everett Rogers. Global media outlets, art magazines, trade publications, and analyst firms are rushing to hire new resources to cover Web3 because they see things like generative NFT artwork as more than a fad. They recognize NFTs made a dot on the art history timeline. On the other hand, brands have mostly perceived batches of generative art as only a new digital revenue stream or novelty, not new channels to tell their larger story. The saying “marketers ruin everything” exists for a reason. When something’s cool, it doesn’t take long for marketers to appear and siphon dollars from it. The bulk of branded NFT collections has brought us to this uncool-Uncle-does-TikTok-dance-videos moment. The interesting projects are outliers, like the adventure-based Louis Vuitton game where 30 NFTs designed by artist Beeple are discoverable. NFTs = another form of worldbuilding At its core, an NFT is a ticket or passport to a show or world. NFTs are one-of-a-kind digital or virtual assets that can take on any use applied to them. To me, anyone selling an NFT creates a world, whether they know it or not. Whether they planned to or not. What brands should learn from thriving NFT communities is that NFTs are Web3 access keys to new worlds. In these worlds, brands add a new layer to customer relationships. When marketers grasp that NFTs are passports, their a-ha moment arrives. This understanding unlocks the limitless possibilities they can offer their brand culture and customers. These NFT-led worlds should be treated — and created — with the same level of care and attention to detail that physical stores, websites, concerts, theme parks, and movies receive. Just as the Apple store immerses us in an environment, NFTs allow a new layer of connection to be established between a seller and buyer. The best fictional worlds — Marvel, Dungeons & Dragons, Star Wars — are nourished by stories, shared values, and fandom. Brands have forgotten to bring strategic storytelling to their hurried NFT releases. How can this be when storytelling is always the starting point for worlds from artists, brands, and franchises like Guillermo del Toro, Jay-Z, Barbie, The Beatles, Spongebob Squarepants, James Bond and LEGO? Portal to success How brands approach the promotion, launch and sale of NFTs will determine if they’re worth anything. 
The evergreen worlds their NFTs can access will be the key to earning community trust, which will ultimately lead to sustainable success. Benjamin Jackendoff (a.k.a. B. Earl) is a Marvel writer and partner at Skyview Way Studios. "
15,065
2,022
"How NFTs create new opportunities in the entertainment industry | VentureBeat"
"https://venturebeat.com/2022/03/08/how-nfts-create-new-opportunities-in-the-entertainment-industry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How NFTs create new opportunities in the entertainment industry Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Earl Flormata, CMO of Mogul Productions. The non-fungible token (NFT) community Arabian Camels is making a $50 million Hollywood film called “Antara.” The movie makers recently announced a highly anticipated NFT drop for the project, which allows holders to partly fund the movie, own a share of digital rights and benefit from its box office achievements. This project is just a glimpse into what NFTs could achieve in the world of entertainment. NFTs started out as standalone pieces of art and memes by individual creators and today is a massive industry that’s swarming with innovative NFT projects today. In 2021 alone, an estimated $41 billion worth of NFTs were sold, which is a testament to their growing popularity and value. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Major industries like art and gaming have benefited from NFTs already, and recent developments suggest that the realm of entertainment could be next in line to receive a transformative blow from the NFT industry. Integrating NFTs into the movie business Worth over $2.2 trillion , the global entertainment industry isn’t only one of the most valuable industries in the world but also a huge part of our culture. Over the span of a century, the industry has never failed to amaze its audience with enticing movies, TV shows, music, and it has stood the test of time, staying relevant even today. The entertainment industry has historically been quick to adapt to technological advancements, whether with the advent of television sets back in the day or streaming services of the 21st century, and it uses these technological advancements to its advantage to keep viewers hooked. So, it’s only natural that when a revolutionary technology like NFTs presents itself, the industry embraces it. The opportunities this brings to the table are truly groundbreaking. Transforming the entertainment industry NFTs in entertainment have the potential to completely transform the way films are made, produced, and distributed, democratizing this unilateral industry in the process. To understand this better, we go back to Arabian Camels’ Antara project. This big-budget Hollywood epic is one of the first films to be produced by the NFT community, demonstrating the concept of movie NFTs. 
For the makers and producers of films, this means they’re no longer limited by budget constraints or by the difficult financing paths through top production houses. Movie NFTs provide a way to distribute part ownership of a film to its viewers, raising the required funds in the process. This is especially useful for budding filmmakers and artists who have little visibility in the industry. Specific roles like “producer” can be represented and offered as an NFT, making its owner a producer of the film. And through all the stages of filmmaking, NFTs can be offered as a way to include the community in the decision-making process, market the project and build a loyal fanbase in advance. For elite production houses and large film franchises with millions of followers, meanwhile, NFTs provide a unique opportunity to solidify their fandom in the metaverse. Multiple production studios, including Disney, are already working on NFT marketplaces, and in a recent development, Lionsgate signed a strategic partnership with the NFT platform Autograph to create NFTs for huge franchises like Mad Men, John Wick and The Hunger Games. “NFTs present a tremendous opportunity for mixed-reality world-building experiences, deepening user engagement and interaction and fostering a community for our hundreds of millions of global consumers to create one-of-a-kind digital collections and Autograph is the optimal destination for this discovery,” says Jenefer Brown, Executive VP & Head of Lionsgate Global. The NFTs to be released could range from character avatars to video clips from the films. For emerging artists, technicians, musicians and directors, NFTs could be an on-ramp to visibility within the industry. The fact that Ben Mauro, a Hollywood concept artist, earned more in seven minutes with his NFT collection than he had in 12 years of working in Hollywood is telling. Like Mauro, emerging artists could use NFTs in unique ways to showcase their talent, build an audience and increase their visibility in the industry. Movie merchandising is another realm where NFTs could make a big difference. Fans have long collected posters, clothing and character figures from their favorite films, and NFTs add a new dimension to this. The rarity of NFTs gives fans the satisfaction of owning a piece of the film that cannot be replicated. Celebrities can also make the most of this opportunity by creating NFT merchandise collections for their fanbases; the trend has picked up recently, with Eminem, Justin Bieber, Snoop Dogg and Jimmy Fallon all following suit. While these are the current use cases, Anndy Lian, CEO of BigOne exchange, says that NFTs could soon be embedded into movies themselves, making them interactive and giving users something to hunt for. Looking at the whole picture, integrating NFTs into the entertainment industry lets users participate actively every step of the way. Both viewers and creators have the chance to connect beyond the screen and take the industry to new dimensions.
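To see how the fractional-funding model described above might work mechanically, here is a deliberately simplified Python sketch. It is hypothetical, not a description of how Antara, Mogul Productions or any real project is implemented: backers buy share NFTs at a fixed price, and a portion of box-office revenue is later split across holders pro rata. Real implementations would live in smart contracts and handle custody, royalties and compliance.

# Simplified, hypothetical model of NFT-based film funding (illustrative only).
# Real projects run on smart contracts; this in-memory sketch just shows the
# economics: backers buy share tokens, then box-office revenue is split pro rata.

from dataclasses import dataclass, field


@dataclass
class MovieProject:
    title: str
    total_shares: int
    price_per_share: float                                  # funding price of one share NFT
    shares: dict[str, int] = field(default_factory=dict)    # backer wallet -> shares held

    def buy_shares(self, wallet: str, count: int) -> float:
        """Mint `count` share NFTs to a backer and return the funds raised."""
        sold = sum(self.shares.values())
        if sold + count > self.total_shares:
            raise ValueError("not enough shares left")
        self.shares[wallet] = self.shares.get(wallet, 0) + count
        return count * self.price_per_share

    def distribute_revenue(self, box_office: float, holders_cut: float = 0.2) -> dict[str, float]:
        """Split the holders' share of box-office revenue across backers, pro rata."""
        pool = box_office * holders_cut
        sold = sum(self.shares.values()) or 1
        return {wallet: pool * held / sold for wallet, held in self.shares.items()}


if __name__ == "__main__":
    film = MovieProject(title="Hypothetical Epic", total_shares=10_000, price_per_share=500.0)
    raised = film.buy_shares("backer_a", 100) + film.buy_shares("backer_b", 25)
    print(f"raised: ${raised:,.0f}")
    print(film.distribute_revenue(box_office=50_000_000))

The same pattern extends to role-based NFTs: a single “producer” token is just a share class of one, whose holder receives defined rights instead of, or in addition to, a revenue split.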
How NFTs open closed doors

Since its inception, the entertainment industry has been opaque in its operations, shutting out outside involvement. This closed way of operating has vastly limited opportunities for emerging artists trying to break into the industry. With NFT integration, however, the industry is opening its doors to a wider set of artists, producers and viewers. While producers and creators gain new ways of monetizing their work and raising funds, viewers now have a way to participate directly in the industry, not only funding films but also sharing in their gains. This symbiotic relationship between viewers and creators could change the face of global entertainment in the years to come.

Earl Flormata is the CMO of Mogul Productions. "
15,066
2,021
"Google's health care data-sharing partnership is a problem | VentureBeat"
"https://venturebeat.com/2021/06/20/googles-health-care-data-sharing-partnership-is-a-problem"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Google’s health care data-sharing partnership is a problem Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On May 26, Google and HCA Healthcare, a national hospital chain, announced a data sharing partnership that will provide the internet giant with access to a host of patient records and real-time medical information. But what is being cast by both companies as a win for improved patient treatments and outcomes is hardly a victory for consumers. Google has a dark history of exploiting personal data for profit. Going back to at least Project Nightingale , Google has collected and monetized sensitive patient data from millions of Americans. The HCA agreement will put an enormous quantity of new patient data into Google’s hands, and some have already pointed out how similar the HCA agreement looks to data sharing arrangement that powered Project Nightingale. But while the new HCA deal poses a major threat to the privacy of consumer’s health data , Washington D.C.’s attention has been elsewhere when it comes to data security. Since the deal was first announced, America has had to face a rising tide of ransomware crime. Data privacy laws have taken a back seat to the fight against ransomware attacks , and American consumers are being left to fend for themselves. Our national discourse simply isn’t taking this new threat to health data privacy seriously enough. It is as if many do not perceive that threat at all. But these data sharing agreements aren’t innocent. And we need to raise the level of awareness of the real risks to act as a catalyst for greater regulatory scrutiny of such efforts. Data sharing agreements between large corporations offer opportunities to better understand trends in patient outcomes and subsequently improve decision making for patient care. As HCA’s chief medical officer stated , the new agreement is designed to create a “central nervous system to help interpret the various signals” of patient data. This might seem like enough of a benefit to override any other concerns, but that is because those other concerns have gone largely unexamined. Consider for a moment just what kinds of risks to patient data already exist, and the concerns consumers have about who has their data and how it is used. 
For example, in 2019 alone, 41.2 million healthcare records were exposed, stolen or illegally disclosed in 505 healthcare data breaches, putting millions of individuals, as well as businesses, at risk of having protected health information misused. It should be clear enough, then, that aggregating large numbers of medical records within a single entity increases the risk of illicit access for a large number of individuals. Privacy concerns are not only about the fact that stolen data could harm patients and consumers, however. They are also tied to the simple reality that individuals feel they have no say in how their personal data is acquired, stored and used by entities with which they have not meaningfully consented to share their information. According to the Pew Research Center, more than half of Americans have no clear understanding of how their data is used once it has been collected, and some 80% are concerned about how much of their data advertisers and social media companies have collected. Generally speaking, consumers do not have a firm grasp of how their information is used, which inhibits their ability to make informed decisions about who can access their data and how they can use it. Similar research finds that consumers feel powerless in the age of big data: three-quarters of Americans say they have little control over the personal information collected on them, and almost nine in 10 are very concerned about their privacy when using free online tools like Facebook and Google. In other words, consumers do not believe they have much of a say when it comes to their own data privacy, and they are right. The legitimate concerns of consumers, combined with a massive and growing amount of data theft, make agreements like the one between Google and HCA unwise, despite their potential benefits. While the data that Google will have access to will be anonymized and secured through Google’s Cloud infrastructure, it will be stored without the consent of the patients whose deeply personal information is in question. This is because privacy laws in the United States allow hospitals to share patient information with contractors and researchers even when patients have not consented. Even when information is anonymized, taking away patients’ control over access to their own information in this way is deeply troubling, no matter the potential health benefits. Privacy concerns are often overlooked because patients and consumers do not feel equipped to safeguard their own information. And when companies can share private information without their even knowing it, how could they? It is high time for companies to prioritize the privacy of patients and to recognize the growing threat to autonomy represented by the aggregation and sharing of large swaths of data. While hospitals may be able to improve care with a raft of new information, leaders in these fields, and the general public, need to start asking tougher questions about how data is acquired and used, and shine a brighter light on the wisdom of sharing such information with a company that is in the business of monetizing consumer data. A more even balance needs to be struck between innovation and privacy, and the agreement between Google and HCA will only make that harder to achieve.
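A note on what “anonymized” typically means in practice helps ground this concern. The Python sketch below is a hypothetical illustration of pseudonymization, not a description of Google’s or HCA’s actual pipeline: direct identifiers are dropped and the record key is replaced with a salted hash, while quasi-identifiers such as ZIP code and birth date often remain, which is why supposedly de-identified records can sometimes be re-identified once they are linked with other datasets.

# Hypothetical illustration of pseudonymizing a patient record before sharing.
# This is NOT how Google or HCA process data; it only shows why "anonymized"
# usually means identifiers are removed or hashed, not that re-identification
# is impossible once records are combined with other data sources.

import hashlib
import os

SALT = os.urandom(16)  # secret salt kept by the data holder

DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}


def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()[:16]
    cleaned["patient_id"] = token
    return cleaned


if __name__ == "__main__":
    raw = {
        "patient_id": "MRN-000123",
        "name": "Jane Doe",
        "ssn": "123-45-6789",
        "email": "jane@example.com",
        "zip": "97205",              # quasi-identifiers like ZIP and birth date
        "birth_date": "1980-04-02",  # can still enable re-identification
        "diagnosis": "E11.9",
    }
    print(pseudonymize(raw))

Hashing an identifier is only as strong as the secrecy of the salt and the size of the identifier space, which is one reason consent and governance matter at least as much as technical controls.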
Tom Kelly is president and CEO of IDX, a Portland, Oregon-based provider of data breach and consumer privacy services such as IDX Privacy. He is a Silicon Valley serial entrepreneur and an expert in cybersecurity technologies. "